error “Kernel panic - not syncing - Attempted to kill init”, how to solve? [closed]
Windows Hardware Submission Update Process is not working right now
Your support forwarded me here and asked me to reach out via Stack Overflow.
We have successfully signed our driver using the hardware submission dashboard and now we are preparing the first update to the driver.
You recommend updating driver packages by downloading a DUA shell package and creating a partial package to sign. However, that process isn't working.
- https://docs.microsoft.com/en-us/windows-hardware/drivers/dashboard/manage-your-hardware-submissions
I am following the advice to download the DUA update package, and then I am trying to create a driver-only update package as described on this page:
However, when I try to import the package that the hardware submission platform generates, HLK Studio cannot open it and exits with an error saying the package file cannot be opened.
Further analysis shows that the downloaded package file has file version 3.8.0, and when I try to merge this package against other packages, HLK Studio reports that it does not support version 3.8; the latest supported version is 3.7.
So, I am stuck at this point. I've followed all the advice, but for some odd reason the update process is just not working.
I've downloaded the latest HLK Studio, so it should (?) be up to date, but for some reason the file generated through the dashboard is still newer and cannot be opened.
Therefore, I cannot create a driver update package.
The only workaround I have right now is submitting a new driver each time but I guess that's not the "correct" way to handle driver updates.
The HLK Studio shows the following version options:
- Controller Version: 10.1.18362.18362
- Studio Version: 10.0.18362.1
The Controller is installed on a Windows Server 2016 system. The tests are passing just fine and everything works, except importing the driver update package file that I can download from the hardware submission dashboard.
Is there any known issue regarding this, or is there a known way to fix it?
Can I stop a Jupyterlab Code Console being auto created or auto-set Preferred Kernel for it?
Apologies if this is a dumb question, but I am new to JupyterLab. I have noticed that whenever I start JupyterLab, when it opens in my browser it creates a new Code Console and always displays a ‘Select Kernel’ popup asking me to ‘Select kernel for: “Console 1”’, with ‘Python 3’ selected as the default. I then have to click the Select button to proceed. This is quite annoying, and I was wondering if I can either stop this new Code Console from being created each time, or just have it default to Python 3 without asking me.
Thanks in advance for any replies.
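A possible direction (an assumption on my part - the exact schema may differ between JupyterLab versions): the Code Console section of the Advanced Settings Editor accepts a kernelPreference object, and its autoStartDefault flag is meant to start the default kernel without showing the picker:

```json
{
    "kernelPreference": {
        "autoStartDefault": true
    }
}
```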
USB Kernel Module not probing
I've written a simple USB device driver, but it's not getting probed when the device is connected to the system; instead, the kernel binds the usb-storage module.
Code for my kernel module:
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/device.h>
#define USB_IT8951_VENDOR_ID 0x048d //ITE Vendor ID
#define USB_IT8951_PRODUCT_ID 0x0220
static int it8951_usb_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
pr_info("test_string 2\n");
return 0;
}
static void it8951_usb_disconnect(struct usb_interface *interface)
{
pr_info("Disconnect enter\n");
}
static const struct usb_device_id it8951_usb_devices [] = {
{ USB_DEVICE(USB_IT8951_VENDOR_ID, USB_IT8951_PRODUCT_ID) },
{},
};
MODULE_DEVICE_TABLE(usb, it8951_usb_devices);
static struct usb_driver it8951_usb_driver_struct = {
.name = "it8951_usb",
.probe = it8951_usb_probe,
.disconnect = it8951_usb_disconnect,
//.fops = &skel_fops,
.id_table = it8951_usb_devices,
};
static int __init it8951_usb_init(void)
{
int result;
pr_info("test_string 1\n");
result = usb_register(&it8951_usb_driver_struct);
if (result < 0) {
pr_err("usb_register failed for the " __FILE__ " driver. Error number %d\n", result);
return result;
}
return 0;
}
module_init(it8951_usb_init);
static void __exit it8951_usb_exit(void)
{
usb_deregister(&it8951_usb_driver_struct);
}
module_exit(it8951_usb_exit);
Dmesg when the device is connected:
[ 668.940000] usb 1-2: new high-speed USB device number 3 using atmel-ehci
[ 669.130000] usb 1-2: New USB device found, idVendor=048d, idProduct=0220
[ 669.130000] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 669.140000] usb 1-2: Product: Digi-Photo-Frame
[ 669.140000] usb 1-2: Manufacturer: Smedia inc.
[ 669.140000] usb 1-2: SerialNumber: 14024689FA08
[ 669.160000] usb-storage 1-2:1.0: USB Mass Storage device detected
[ 669.170000] scsi host0: usb-storage 1-2:1.0
[ 670.240000] scsi 0:0:0:0: Direct-Access Generic Storage RamDisc 1.00 PQ: 0 ANSI: 0 CCS
[ 670.260000] sd 0:0:0:0: [sda] 1 512-byte logical blocks: (512 B/512 B)
[ 670.270000] sd 0:0:0:0: [sda] Write Protect is off
[ 670.270000] sd 0:0:0:0: [sda] Mode Sense: 03 00 00 00
[ 670.270000] sd 0:0:0:0: [sda] No Caching mode page found
[ 670.270000] sd 0:0:0:0: [sda] Assuming drive cache: write through
[ 670.320000] sd 0:0:0:0: [sda] Attached SCSI removable disk
I can't remove usb-storage support from the system, so is there a way to force the kernel to use my module instead of usb-storage? Or can I write a udev rule to re-assign the device?
Thanks in advance.
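One workaround sketch (using the VID/PID from the dmesg log above; the quirks module parameter is usb-storage's standard mechanism for skipping a device): tell usb-storage to ignore this device so another driver's probe can claim it.

```shell
# Compose the usb-storage "ignore device" quirk for the panel;
# VID/PID are taken from the dmesg output above, and 'i' means ignore.
VID=048d
PID=0220
echo "options usb-storage quirks=${VID}:${PID}:i"
# As root, persist it and reload the module:
#   echo "options usb-storage quirks=048d:0220:i" > /etc/modprobe.d/it8951.conf
#   modprobe -r usb-storage && modprobe usb-storage
```

With usb-storage out of the way, the kernel should fall through to any other registered driver whose id_table matches 048d:0220.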
Question about C Syntax: Nested curly brackets for struct declaration mid-function?
Title may be a bit confusing, as I am having trouble describing it. I am sure this has already been asked, but I have no idea how to properly phrase it and find it on the site.
Essentially, for C, I am looking through some kernel code and see that in some functions there is an additional 'nested' set of curly braces ('{}'), which always has a header comment of "TRACE" and contains info about a struct.
I am trying to figure out what this syntax is called and more about it in general. I would appreciate any and all help. Thanks so much! (Screen cap below)
CUDA Kernel is starting but not finishing [closed]
I have the following code:
__global__ void test_kernel (int n, int* array) {
printf("test_kernel\n");
//for(int i=0; i<n; i++) {
// array[i] = 0;
//}
array[blockIdx.x] = 0;
printf("\n");
printf("test_kernel done\n");
}
void test_wrapper(int n) {
printf("test_wrapper\n");
int array[n];
test_kernel<<<1, 1>>>(n, array);
cudaDeviceSynchronize();
}
Basically, I want to initialize the array to all zeroes (later I want the kernel to fill it with different values, but this is just for starters). I've tried this two different ways: with a for loop like normal code, or using blockIdx.x. Either way, when I add in one of these statements, "test_kernel done" doesn't print.
I thought cudaDeviceSynchronize would ensure my kernel finished. What exactly is going on here?
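Not a definitive diagnosis, but one detail stands out: int array[n] lives on the host stack, while the kernel dereferences that pointer on the device. A sketch of what a device-allocated version might look like (standard CUDA runtime API; the structure mirrors the question's code, names are otherwise invented):

```cuda
#include <cstdio>
#include <vector>

__global__ void test_kernel(int n, int* array) {
    printf("test_kernel\n");
    if (blockIdx.x < n)
        array[blockIdx.x] = 0;          // device-visible memory this time
    printf("test_kernel done\n");
}

void test_wrapper(int n) {
    printf("test_wrapper\n");
    int* d_array = nullptr;
    cudaMalloc(&d_array, n * sizeof(int));      // allocate on the GPU
    test_kernel<<<n, 1>>>(n, d_array);          // one block per element
    cudaError_t err = cudaDeviceSynchronize();  // wait, and surface errors
    if (err != cudaSuccess)
        printf("kernel error: %s\n", cudaGetErrorString(err));
    std::vector<int> host(n);
    cudaMemcpy(host.data(), d_array, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_array);
}
```

Checking the value returned by cudaDeviceSynchronize (or calling cudaGetLastError) should also report why the original kernel stops before its final printf.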
linux kernel panic unable to handle kernel NULL pointer dereference at
I'm facing kernel panics, but I have no idea how to find which software is exactly causing the issue. I'm compiling some software on a remote host using distcc, but the machines doing the compiling are going down because of this issue.
Could you point me to where I should start looking? What could cause this issue? Which tools should I use?
Here is kernel panic output:
[591792.656853] IP: [< (null)>] (null)
[591792.658710] PGD 800000032ca05067 PUD 327bc6067 PMD 0
[591792.660439] Oops: 0010 [#1] SMP
[591792.661562] Modules linked in: fuse nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache nls_utf8 isofs sunrpc dm_mirror dm_region_hash dm_log dm_mod sb_edac iosf_mbi kvm_intel ppdev kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd cirrus ttm joydev drm_kms_helper sg virtio_balloon syscopyarea sysfillrect sysimgblt fb_sys_fops drm parport_pc parport drm_panel_orientation_quirks pcspkr i2c_piix4 ip_tables xfs libcrc32c sr_mod cdrom virtio_blk virtio_net ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw floppy ata_piix libata virtio_pci virtio_ring virtio
[591792.682098] CPU: 2 PID: 25548 Comm: cc1plus Not tainted 3.10.0-957.el7.x86_64 #1
[591792.684495] Hardware name: Red Hat OpenStack Compute, BIOS 1.11.0-2.el7 04/01/2014
[591792.686923] task: ffff8ebb65ea1040 ti: ffff8ebb6b250000 task.ti: ffff8ebb6b250000
[591792.689315] RIP: 0010:[<0000000000000000>] [< (null)>] (null)
[591792.691729] RSP: 0018:ffff8ebb6b253da0 EFLAGS: 00010246
[591792.693438] RAX: 0000000000000000 RBX: ffff8ebb6b253e40 RCX: ffff8ebb6b253fd8
[591792.695716] RDX: ffff8ebb38098840 RSI: ffff8ebb6b253e40 RDI: ffff8ebb38098840
[591792.697992] RBP: ffff8ebb6b253e30 R08: 0000000000000100 R09: 0000000000000001
[591792.700271] R10: ffff8ebb7fd1f080 R11: ffffd7da0beb9380 R12: ffff8eb8417af000
[591792.702547] R13: ffff8eb875d1b000 R14: ffff8ebb6b253f24 R15: 0000000000000000
[591792.704821] FS: 0000000000000000(0000) GS:ffff8ebb7fd00000(0063) knlGS:00000000f7524740
[591792.707397] CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
[591792.709242] CR2: 0000000000000000 CR3: 000000032eb0a000 CR4: 00000000003607e0
[591792.711519] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[591792.713814] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[591792.716100] Call Trace:
[591792.716927] [<ffffffff9165270b>] ? path_openat+0x3eb/0x640
[591792.718727] [<ffffffff91653dfd>] do_filp_open+0x4d/0xb0
[591792.720451] [<ffffffff91661504>] ? __alloc_fd+0xc4/0x170
[591792.722267] [<ffffffff9163ff27>] do_sys_open+0x137/0x240
[591792.724017] [<ffffffff916a1fab>] compat_SyS_open+0x1b/0x20
[591792.725820] [<ffffffff91b78bb0>] sysenter_dispatch+0xd/0x2b
[591792.727648] Code: Bad RIP value.
[591792.728795] RIP [< (null)>] (null)
[591792.730486] RSP <ffff8ebb6b253da0>
[591792.731625] CR2: 0000000000000000
[591792.734935] ---[ end trace ccfdca9d4733e7a5 ]---
[591792.736450] Kernel panic - not syncing: Fatal exception
[591792.738708] Kernel Offset: 0x10400000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
What are the steps to create a patchset in git
So I want to create a patchset - 3 different patches in total for a code fix. It's a git-based project.
I have thought of the following steps:
- I am on the master branch (via git checkout master).
- Create 3 different branches: git branch First, git branch Second and git branch Third.
- Make the changes for code fix 1 on the First branch, then create patch 1 from the diff between master and First.
- Make the changes for code fix 2 on the Second branch, then create patch 2 from the diff between master and Second.
- And similarly for the third fix.
It is important to note that the code changes for all 3 patches are in a single .c file. Also, I can't make a single patch of all the code fixes - I have to make 3 different patches - this is a requirement.
Actually the patches should be independent - patch 1 can be applied by developer 1 at some commit hash, patch 2 can be applied by another developer at another different commit hash - and similarly for dev 3.
I am confident that there is a way to create the 3 patches using only a single branch. Kindly illuminate.
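As a sketch of the branch-per-fix workflow above (the repository, file names and commit messages are invented for the demo), git format-patch against master produces one independent patch per branch:

```shell
# Throwaway demo: three independent patches, each built against master.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m base
git branch -M master                     # normalize the branch name
for fix in first second third; do
    git checkout -q -b "$fix" master     # every fix branches from master
    echo "$fix change" >> code.c
    git add code.c
    git -c user.name=demo -c user.email=demo@example.com commit -q -m "fix-$fix"
    # one .patch per branch, diffed against master, so each applies independently
    git format-patch master -o patches/ >/dev/null
    git checkout -q master
done
ls patches/
```

Each developer can then apply just the patch they need with git am or git apply at whatever commit they are on, as long as the patch context still matches.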
Convert colours of every pixel in video preview - Swift
I have the following code which displays a camera preview, retrieves a single pixel's colour from the UIImage, and converts this value to a 'filtered' colour.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
connection.videoOrientation = orientation
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(cvImageBuffer: pixelBuffer!)
let typeOfColourBlindness = ColourBlindType(rawValue: "deuteranomaly")
/* Gets colour from a single pixel - currently 0,0 and converts it into the 'colour blind' version */
let captureImage = convert(cmage: cameraImage)
let colour = captureImage.getPixelColour(pos: CGPoint(x: 0, y: 0))
var redval: CGFloat = 0
var greenval: CGFloat = 0
var blueval: CGFloat = 0
var alphaval: CGFloat = 0
_ = colour.getRed(&redval, green: &greenval, blue: &blueval, alpha: &alphaval)
print("Colours are r: \(redval) g: \(greenval) b: \(blueval) a: \(alphaval)")
let filteredColour = CBColourBlindTypes.getModifiedColour(.deuteranomaly, red: Float(redval), green: Float(greenval), blue: Float(blueval))
print(filteredColour)
/* #################################################################################### */
DispatchQueue.main.async {
// placeholder for now
self.filteredImage.image = self.applyFilter(cameraImage: cameraImage, colourBlindness: typeOfColourBlindness!)
}
}
Here is where the x: 0, y: 0 pixel value is converted:
import Foundation
enum ColourBlindType: String {
case deuteranomaly = "deuteranomaly"
case protanopia = "protanopia"
case deuteranopia = "deuteranopia"
case protanomaly = "protanomaly"
}
class CBColourBlindTypes: NSObject {
class func getModifiedColour(_ type: ColourBlindType, red: Float, green: Float, blue: Float) -> Array<Float> {
switch type {
case .deuteranomaly:
return [(red*0.80)+(green*0.20)+(blue*0),
(red*0.25833)+(green*0.74167)+(blue*0),
(red*0)+(green*0.14167)+(blue*0.85833)]
case .protanopia:
return [(red*0.56667)+(green*0.43333)+(blue*0),
(red*0.55833)+(green*0.44167)+(blue*0),
(red*0)+(green*0.24167)+(blue*0.75833)]
case .deuteranopia:
return [(red*0.625)+(green*0.375)+(blue*0),
(red*0.7)+(green*0.3)+(blue*0),
(red*0)+(green*0.3)+(blue*0.7)]
case .protanomaly:
return [(red*0.81667)+(green*0.18333)+(blue*0.0),
(red*0.33333)+(green*0.66667)+(blue*0.0),
(red*0.0)+(green*0.125)+(blue*0.875)]
}
}
}
The placeholder for now comment refers to the following function:
func applyFilter(cameraImage: CIImage, colourBlindness: ColourBlindType) -> UIImage {
//do stuff with pixels to render new image
/* Placeholder code for shifting the hue */
// Create a place to render the filtered image
let context = CIContext(options: nil)
// Create filter angle
let filterAngle = 207 * Double.pi / 180
// Create a random color to pass to a filter
let randomColor = [kCIInputAngleKey: filterAngle]
// Apply a filter to the image
let filteredImage = cameraImage.applyingFilter("CIHueAdjust", parameters: randomColor)
// Render the filtered image
let renderedImage = context.createCGImage(filteredImage, from: filteredImage.extent)
// Return a UIImage
return UIImage(cgImage: renderedImage!)
}
And here is my extension for retrieving a pixel colour:
extension UIImage {
func getPixelColour(pos: CGPoint) -> UIColor {
let pixelData = self.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
How can I create a filter for the following colour range for example?
I want to take in the camera input, replace the colours to be within the Deuteranopia range, and display this on the screen, in real time, using Swift.
I am using a UIImageView for the image display.
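A possible simplification (a sketch, not tested against this project): Core Image's built-in CIColorMatrix filter can apply the same matrix as the .deuteranopia case above to the entire frame on the GPU, instead of converting one pixel at a time:

```swift
import CoreImage

// Sketch: apply the deuteranopia rows from CBColourBlindTypes to the
// whole CIImage at once via CIColorMatrix (function name is made up).
func applyDeuteranopiaFilter(to image: CIImage) -> CIImage {
    return image.applyingFilter("CIColorMatrix", parameters: [
        "inputRVector": CIVector(x: 0.625, y: 0.375, z: 0,   w: 0),
        "inputGVector": CIVector(x: 0.7,   y: 0.3,   z: 0,   w: 0),
        "inputBVector": CIVector(x: 0,     y: 0.3,   z: 0.7, w: 0),
        "inputAVector": CIVector(x: 0,     y: 0,     z: 0,   w: 1)
    ])
}
```

The resulting CIImage can then be rendered with the same CIContext code already used in applyFilter.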
RT-patched kernel - any way to limit the build size?
I'm new to compiling my own kernels and I'm trying to follow what guides I can. My problem is that I'm trying to install a patched real-time kernel (linux-5.0.21) on a Beckhoff CX5130 industrial PLC running Ubuntu Server 18.04.4 LTS with a 32 GB hard disk. Unfortunately, when I use sudo make install -j20, I run out of space on the disk, as it is already full from when I ran the make -j20 command. Is there any way to limit the drivers it builds? Or can I build the kernel in a virtual machine on my main PC and then move it to the PLC? Or is it possible to remove some of the files after the sudo make modules_install -j20 command has been executed?
I have been using this guide: https://hungpham2511.github.io/setup/install-rtlinux/
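Two things usually dominate the on-disk size of a kernel build (general kernel-build practice, not something from the guide): unstripped modules full of debug symbols, and a distro-style config that builds thousands of modules. A sketch, assuming you are in the configured linux-5.0.21 source tree:

```shell
# Trim the config to only the modules currently loaded on this machine:
make localmodconfig
make -j"$(nproc)"
# Strip debug info while installing; this typically shrinks /lib/modules
# from gigabytes to tens of megabytes:
sudo make INSTALL_MOD_STRIP=1 modules_install
sudo make install
```

Building in a VM also works in principle, as long as you run localmodconfig (or copy /proc/config.gz) against the PLC's module list rather than the VM's.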
Can't observe REQ_OP_DISCARD in struct request
I'm trying to monitor I/O with eBPF (Ubuntu 18.04, kernel 5.2, virtual machine).
I attach a kprobe to the blk_mq_start_request function, and I monitor the type of each request by checking the cmd_flags field in struct request.
As a result, I can see REQ_OP_READ and REQ_OP_WRITE.
But despite deleting a file, there's no REQ_OP_DISCARD...
The unlink syscall is called when deleting a file, and it seems to lead to the blk_mq_start_request function, but I can't find out why REQ_OP_DISCARD is not shown.
(Oddly, discard requests do show up in blktrace, so there seems to be discard I/O at the bio level.)
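One thing that may be worth checking (an assumption about the setup, not a confirmed diagnosis): unlink only leads to discard requests if the filesystem actually issues them, which on ext4 requires the discard mount option or an explicit fstrim run:

```shell
# See whether the filesystem is mounted with online discard:
findmnt -o TARGET,OPTIONS /
# Either remount with it, or trigger a batched discard explicitly:
sudo mount -o remount,discard /
sudo fstrim -v /        # should generate REQ_OP_DISCARD requests
```

If the requests then appear at blk_mq_start_request, the earlier absence was a filesystem policy issue rather than a tracing one.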
kernel keeps dying in jupyter notebook
Whenever I start Jupyter Notebook and create a new Python 3 notebook, I get an error message saying the kernel has died. I have tried uninstalling and reinstalling IPython, Python 3.6.5, and Jupyter Notebook, but I still get the error message.
My cmd screen is as follows:
[I 06:46:36.432 NotebookApp] KernelRestarter: restarting kernel (4/5), new random ports WARNING:root:kernel 0d0442a9-c92f-46e6-acdd-08ca0a18c5f2 restarted Traceback (most recent call last):
File "c:\users\user\appdata\local\programs\python\python36-32\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\ipykernel\connect.py", line 13, in <module>
    from IPython.core.profiledir import ProfileDir
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\IPython\__init__.py", line 55, in <module>
    from .terminal.embed import embed
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\IPython\terminal\embed.py", line 17, in <module>
    from IPython.terminal.ipapp import load_default_config
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\IPython\terminal\ipapp.py", line 34, in <module>
    from IPython.extensions.storemagic import StoreMagics
ModuleNotFoundError: No module named 'IPython.extensions'
[W 06:46:39.450 NotebookApp] KernelRestarter: restart failed
[W 06:46:39.450 NotebookApp] Kernel 0d0442a9-c92f-46e6-acdd-08ca0a18c5f2 died, removing from map.
ERROR:root:kernel 0d0442a9-c92f-46e6-acdd-08ca0a18c5f2 restarted failed! [W 06:46:39.461 NotebookApp] 410 DELETE /api/sessions/67987236-8755-433a-afcb-e052ccbf65b9 (::1): Kernel deleted before session
[W 06:46:39.461 NotebookApp] Kernel deleted before session
[W 06:46:39.461 NotebookApp] 410 DELETE /api/sessions/67987236-8755-433a-afcb-e052ccbf65b9 (::1) 1.00ms
referer=http://localhost:8888/notebooks/Untitled5.ipynb?kernel_name=python3
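Since the ModuleNotFoundError points at IPython.extensions, a package that ships inside IPython itself, one hedged guess is a corrupted IPython installation; force-reinstalling it would be a low-risk first step (package names only, nothing project-specific):

```shell
python -m pip install --force-reinstall ipython ipykernel
python -c "import IPython.extensions; print('ok')"   # sanity check
```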
centos system can't compile lkm
I installed the Linux headers, created a Makefile, and tried to build the module, but I got an error that I can't find anything about on the internet. This is the error:
make -C /lib/modules/4.18.0-147.el8.x86_64/build M=/home/daniel modules
make[1]: Entering directory '/usr/src/kernels/4.18.0-147.el8.x86_64'
arch/x86/Makefile:184: *** Compiler lacks asm-goto support.. Stop.
make[1]: Leaving directory '/usr/src/kernels/4.18.0-147.el8.x86_64'
make: *** [Makefile:5: all] Error 2
And this is my Makefile:
obj-m := hook.o
export-objs := hook.o
all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
how can I fix CONFIG_RETPOLINE=y, but not supported by the compiler [closed]
I'm updating kernel module build scripts to work on more than a single kernel.
I've needed to call
make -C [kernel_dir] M=[module_dir] CONFIG_MODULE_SIG=n
This has been mostly... simple (I've had to add a few extra DEFINEs for some, but otherwise it was a smooth change), but I have one driver, the alphadata admxrc2.ko, that is producing a bizarre error:
$ make -C /usr/src/kernels/3.10.0-1062.9.1.el7.x86_64 M=/home/me/drivers/admxrc_drv-4.3.1/monolithic/linux CONFIG_MODULE_SIG=n
make: Entering directory
`/usr/src/kernels/3.10.0-1062.9.1.el7.x86_64'
arch/x86/Makefile:166: *** CONFIG_RETPOLINE=y, but not supported by the compiler. Compiler update recommended.. Stop.
make: Leaving directory `/usr/src/kernels/3.10.0-1062.9.1.el7.x86_64'
It's been baffling, because the compiler is sufficiently recent that it supports CONFIG_RETPOLINE.
Linux Kernel Changing Default CPU Scheduler
I am trying to hack on the Linux kernel, and I am wondering: how can I change the default Linux process scheduler to another one? And can I just set every process as a real-time process?
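For context (general POSIX/Linux behaviour, not specific to any scheduler patch): the scheduling policy is per-task rather than global, and SCHED_FIFO/SCHED_RR are the real-time classes; user space can move a task into them without touching the kernel. A small sketch:

```python
import os

# SCHED_FIFO is the POSIX fixed-priority real-time class; on Linux its
# static priorities span 1..99.
print(os.sched_get_priority_min(os.SCHED_FIFO))  # 1
print(os.sched_get_priority_max(os.SCHED_FIFO))  # 99

# Switching the current process would look like this, but it needs
# CAP_SYS_NICE (or a suitable RLIMIT_RTPRIO), so it is commented out:
# os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
```

Making literally every process real-time is generally unwise: a runaway SCHED_FIFO task can starve the rest of the system, which is why the kernel gates it behind privileges.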
What can cause kopen to hang?
I have the following line from an AIX truss I ran:
kopen("path/to/file", O_WRONLY|O_CREAT|O_NSHARE|O_DSYNC|O_LARGEFILE,S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH) (sleeping...)
I'm assuming the sleeping... means we are waiting here for an extended period of time (after 10 minutes I killed it). What would cause this to hang? The only thing I can think of is that the file is locked - but then, would it hang, or just return some error code?
Xilinx Vivado 2019.2 - Vitis - package_project - ERROR: [Common 17-161] Invalid option value '' specified for 'objects'
I'm using Ubuntu 16.04, Xilinx Vitis (with Vivado 2019.2) in order to produce an xclbin file from synthesis and so.
I've created a Vitis then Vivado "empty application" project with my needs of 4x AXI. I've added my Verilog code files to the Vivado project in the GUI.
I've succeeded in generating the RTL kernel via the GUI, then produced the xclbin file back in the Vitis GUI.
In order to do this with updated code files, from the command line, with tcl, I've tried to repeat the same tcl commands from the Vivado RTL kernel generation process. Near the end of the "package_project" run it fails with: ERROR: [Common 17-161] Invalid option value '' specified for 'objects'. INFO: [Common 17-206] Exiting Vivado at Wed Mar 11 20:22:01 2020...
Then no xo file is being generated, and the whole Vitis xclbin producing can't start.
If I run the same commands one by one in the GUI tcl console, everything works well.
What am I missing?
Another issue: at the beginning of the "package_project" process, it reports that many of my Verilog code files are not being packaged because they are unreferenced from the top module:

WARNING: [IP_Flow 19-3833] Unreferenced file from the top module is not packaged: '/home/ubuntu/workspace/vitis_kernel_wizard_1/vivado_rtl_kernel/vivado_rtl_kernel.srcs/sources_1/ip/rtl_kernel_wizard_1/rtl_kernel_wizard_1.xci'.
WARNING: [IP_Flow 19-3833] Unreferenced file from the top module is not packaged: '/vitis_src/axi_infrastructure_v1_1_0.vh'.
WARNING: [IP_Flow 19-3833] Unreferenced file from the top module is not packaged: '/vitis_src/rtl_kernel_wizard_1.v'.
...

And many more like these.
I've set the top file correctly, and the files are all referenced from the top module on down. It also works via the GUI "generate RTL kernel" process.
What can be the problem?
Note: I've also posted this question on the Xilinx Forum; it hasn't been answered yet. Hopefully I'll find the answer here.
Thank you so much for your help.
How time is updated when tick interrupts are disabled
After reading the Linux manual Understanding The Linux Kernel, I'm left with an unsolved question. The tick interrupt handler is where the kernel keeps the time data structures updated. The manual gives only a very limited explanation of recovering lost tick interrupts, for example:
cur_timer points to the timer_hpet object: in this case, the HPET chip is the source of timer interrupts. The mark_offset method checks that no timer interrupt has been lost since the last tick; in this unlikely case, it updates jiffies_64 accordingly.
So, can anyone shed some light on how the Linux kernel keeps track of time in case tick interrupts are lost? What does 'accordingly' stand for here?
Linux kernel module to list children of a given pid
I am trying to implement a module which takes a pid as an input parameter and lists all children pids when the module is loaded, i.e. when insmod is called. However, I do not know how to achieve this goal. I tried something by looking at some tutorials.
So far, I am able to get the input, load the module, and find the task by its pid. What I am not able to do is list the children's pids.
Here is my init function:
int pid_init(void){
if(pid == -1){
printk(KERN_ALERT "No input entered!\n");
return 0;
}
struct pid *pid_struct = find_get_pid(pid);
struct task_struct *parent = pid_task(pid_struct, PIDTYPE_PID);
if(parent == NULL){
printk(KERN_ALERT "No process found!\n");
return 0;
}
struct task_struct *task;
struct list_head *list;
printk(KERN_INFO "%d\n", parent->pid);
list_for_each(list, &parent->children) {
task = list_entry(list, struct task_struct, sibling);
printk(KERN_INFO "%d\n", task->pid);
}
return 0;
}
I tried with the following pids (output of pstree -p):
├─udisksd(783)─┬─{udisksd}(792)
├─{udisksd}(795)
├─{udisksd}(898)
└─{udisksd}(920)
When I call sudo insmod module.ko pid=783, the output is just 783 (which is printed by me), so it means the list_for_each body is never executed.
However, when I run sleep 100 & three times and execute sudo insmod module.ko with pid=<bash>, then I can list the pids of the sleep 100 & calls.
Finally, I am using Ubuntu, in case you need to know.
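One hedged observation about the udisksd test case (based on how pstree renders threads, not on anything wrong in the module itself): entries in curly braces like {udisksd}(792) are threads of pid 783, and threads sit on the thread list rather than on ->children. A kernel-only fragment (not standalone-buildable) that would walk them, assuming the same parent variable as above:

```c
/* Threads of `parent` are not on ->children; walk the thread list instead. */
struct task_struct *t;

rcu_read_lock();
for_each_thread(parent, t)
	printk(KERN_INFO "thread: %d\n", t->pid);
rcu_read_unlock();
```

That would also explain why backgrounded sleep processes (true child processes of bash) do show up while the udisksd worker threads do not.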
Does a kernel-level thread still have a process running with it?
So if I understand correctly, a user-level thread has a thread control block, and there is a process running the actual thread. That process is what actually gets scheduled to run, and that process is what gives the thread access to the OS's resources.
In the case of kernel-level threads, is there still a process with a process control block running alongside to provide access to the computer's resources?