Channel: Active questions tagged kernel - Stack Overflow
Viewing all 6341 articles

How is hyperthreading useful in a KVM-based guest VM?


We create virtual OS threads inside the guest OS, but the actual physical cores are on the host machine. Will setting hyperthread = true in the guest VM configuration make any difference?

I can visualize how hyperthreading works on the host, since the host has physical cores, but a VM has virtualized OS threads, so it is hard for me to visualize hyperthreading inside a VM.

Can someone please explain how it would be useful?


PintOS user program does not print


I'm trying to follow this guide, specifically the section about user programs. Apparently I am able to successfully copy a program from Ubuntu into the Pintos filesystem, because I can see the file by running pintos -q ls.

(Screenshot: output of pintos -q ls)

When running this:

pintos-mkdisk filesys.dsk --filesys-size=2
pintos -f -q
pintos -p ../../examples/echo -a echo -- -q
pintos -q run 'echo x'

I only get this, and nothing is printed:

(Screenshot: running the echo program inside Pintos)

Any idea why the output is not shown? I've also tried with the "hellopintos" file, which is simply a hello-world like this:

#include <stdio.h>
#include <syscall.h>

int main(void)
{
    printf("Hello pintos\n");
    return 0;
}

How to solve "Kernel panic - not syncing - Attempted to kill init"


I installed Android-x86 9.0 on an old notebook PC.

I have a problem: when Android starts, it reboots while loading. I then booted in debug mode, and the screen shows a kernel panic (screenshot attached).

Can anyone help me? Thank you.

Memory-bound and compute-bound kernels in GPUs


What are a "memory-bound kernel" and a "compute-bound kernel" in GPUs?

Is this related to GPU performance?
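For intuition, the distinction is usually framed via the roofline model: a kernel is memory-bound when its arithmetic intensity (FLOPs performed per byte of memory traffic) is below the machine balance of the GPU, and compute-bound when it is above. A minimal sketch of that classification; the GPU numbers below are illustrative assumptions, not measurements of any real device:

```python
# Roofline-style classification sketch: a kernel is compute-bound when its
# arithmetic intensity exceeds the machine balance (peak FLOP/s / peak bytes/s).

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

def classify(flops, bytes_moved, peak_flops, peak_bandwidth):
    """Return 'compute-bound' or 'memory-bound' under the roofline model."""
    machine_balance = peak_flops / peak_bandwidth  # FLOPs per byte the GPU can sustain
    if arithmetic_intensity(flops, bytes_moved) >= machine_balance:
        return "compute-bound"
    return "memory-bound"

# Hypothetical GPU: 10 TFLOP/s peak compute, 500 GB/s peak bandwidth.
# A vector add does roughly 1 FLOP per 12 bytes moved, far below the
# machine balance, so its runtime is dominated by memory traffic.
print(classify(flops=1, bytes_moved=12, peak_flops=10e12, peak_bandwidth=500e9))
```

A memory-bound kernel is sped up by reducing or coalescing memory traffic; a compute-bound one by reducing arithmetic work, so the classification directly answers which optimization matters.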

Avscan Windows minifilter driver can't run with a 32-bit application


I built the avscan minifilter driver:

avscan/user/avscan.vcxproj : 32-bit
avscan/filter/avscan.vcxproj : 64-bit

The user app can't connect to the driver.
Can anyone explain how to support a 32-bit application with a 64-bit driver?

My OS: Windows 10, 64-bit

I saw this one, but I don't know how to apply it.

The error is "Failed to send message SendMessageToCreateSection to the minifilter" in userscan.c

    hr = FilterSendMessage( Context->ConnectionPort,
                            &commandMessage,
                            sizeof( COMMAND_MESSAGE ),
                            &sectionHandle,
                            sizeof( HANDLE ),
                            &bytesReturned );

    if (FAILED(hr)) {

        fprintf(stderr,
          "[UserScanHandleStartScanMsg]: Failed to send message SendMessageToCreateSection to the minifilter.\n");
        DisplayError(hr);
        return hr;
    }

Thanks

Cloning the testing branch on git


I have been told that development takes place in the "testing" branch of some XYZ git tree:

https://git.develop.org/def/pqr/abc/xyz.git/

The above link is an imaginary/hypothetical git link related to Linux kernel.

So how do I clone the testing branch on my local laptop? Will the following command suffice?

git clone https://git.develop.org/def/pqr/abc/xyz.git/

What if it clones another branch, for example master? Do I need to switch to the testing branch explicitly using git checkout ...?

Or is there another command to clone the testing branch directly?

Is it possible to get the kernel version from an ELF image file without disassembling it or using grep or strings?


I have a vmlinuz ELF image file. I need to get the kernel version from the image file without disassembling it. Is it possible to get the kernel version from offsets within that compressed image file? The file is an ELF 64-bit MSB executable, statically linked, not stripped.

Mapping IO memory directly to user space


The system is an SoC running Linux. I use a section of the DDR for I/O. This section of the DDR is hidden from the OS: the OS doesn't "see" this part of the DDR and thinks it has less memory than the actual size.

Currently my driver maps this section into the kernel using ioremap, and the user-space application reads/writes data to this section using the read/write functions I implemented in the driver.

I want to map this part of memory directly to user space using mmap, to avoid the copies between user space and the kernel and thereby improve performance.

I'm not sure how to do it. From what I've read, I need to use the remap_pfn_range function. I wrote the mmap function as follows:

vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pfn = phy_add >> PAGE_SHIFT;
vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;

ret = remap_pfn_range(vma, vma->vm_start, pfn, mem_size, vma->vm_page_prot);
if (ret < 0) {
    pr_err("could not map the address area\n");
    return -EIO;
}

return ret;

In ifaddrs.h's ifaddrs struct on OS X, what can I cast ifa_data to?


I'm getting my ifaddrs structs, which look like this:

struct ifaddrs {
    struct ifaddrs  *ifa_next;
    char        *ifa_name;
    unsigned int     ifa_flags;
    struct sockaddr *ifa_addr;
    struct sockaddr *ifa_netmask;
    struct sockaddr *ifa_dstaddr;
    void        *ifa_data;
};

What can I cast the void *ifa_data to? I'd like to take a look inside. I'm using C/C++ (compiling with a C++ compiler). I've seen people cast it to rtnl_link_stats, but that struct doesn't seem to be part of OS X. Any ideas?

Also, a bonus question: whenever I access the sockaddrs in my ifaddrs struct, the sockaddr's sa_data member is always blank/empty. Any ideas why?

Thanks!

Linux Kernel: Error 'Implicit declaration of function 'getuid' [duplicate]


I am implementing a custom syscall in the Linux kernel and need to check whether a process is running as superuser. I am using the getuid() function like this:

#include <linux/unistd.h>

if (getuid() == 0) {
...
}

However, when I compile I'm getting the error error: implicit declaration of function 'getuid' [-Werror=implicit-function-declaration].

I thought this would only be an issue if it can't find the header file, but I am including it, so I'm not sure where to go. Any ideas?

What can I do with this compilation error?


I have this error:

arch/arm64/Makefile:47: *** CROSS_COMPILE_ARM32 not defined or empty, the compat vDSO will not be built.

I have tried to export it like this:

export CROSS_COMPILE_ARM32=/home/avmiz/kernek_dev/arm-linux-androideabi-4.9/bin/arm-linux-android-

but it still gives me the same error, which is strange because my phone is aarch64.

FIQ interrupt configuration in RPi


I have tried to write a driver (in the kernel) that interrupts every time data received from the ADC is ready to be read (i.e., the analog-to-digital conversion is done). The problem is that sometimes the interrupts are not triggered, I guess because they interfere with other interrupts. I want to configure the ELC (End of Last Conversion) interrupt request as an FIQ to solve this problem. I have searched the internet for days but couldn't find out how to configure an interrupt to be an FIQ.

This post is the closest topic I have found that could help me figure out how to configure an interrupt as an FIQ: BCM2708 (RPi) Raspbian FIQ not triggered.

Thanks!

I have already specified kernel_size in my code

def unet(pretrained_weights = None, input_size = (256,256,1)):
    model = Sequential()
    inputs = Input(input_size)

    conv1 = DeformableConvLayer(Conv2D)(filter=6, kernel_size=3, strides=1, num_deformable_group=1, activation='relu')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    x1 = BatchNormalization()(conv1)
    b1 = Dropout(rate=0.3)(x1)

error: __init__() missing 1 required positional argument: 'kernel_size'
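This error means a required constructor argument never reached the layer's __init__. A minimal reproduction with a hypothetical ToyConvLayer class standing in for the real layer (this is not the actual DeformableConvLayer API, just an illustration of the error pattern):

```python
# Minimal reproduction of "__init__() missing 1 required positional argument":
# a hypothetical layer whose constructor requires kernel_size, like Keras Conv2D.

class ToyConvLayer:
    def __init__(self, filters, kernel_size, activation=None):
        self.filters = filters
        self.kernel_size = kernel_size
        self.activation = activation

# Forgetting kernel_size raises the TypeError from the question:
try:
    layer = ToyConvLayer(filters=6)
except TypeError as e:
    print(e)  # message names the missing 'kernel_size' argument

# Passing it explicitly in the constructor call works:
layer = ToyConvLayer(filters=6, kernel_size=3, activation="relu")
```

So even though kernel_size appears somewhere in the code, the error says it was missing in the specific constructor call that failed; the call `DeformableConvLayer(Conv2D)` passes the Conv2D class as the first positional argument, which may not be how that layer expects to receive its parameters.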

Jupyter - failed to start the kernel


I am suddenly getting the following error when trying to launch a Jupyter notebook: "Failed to start the kernel" (unhandled error). Any ideas how to fix this?

Traceback (most recent call last):
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\web.py", line 1699, in _execute
    result = await result
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\notebook\services\sessions\handlers.py", line 72, in post
    type=mtype))
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 735, in run
    value = future.result()
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 88, in create_session
    kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 735, in run
    value = future.result()
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 101, in start_kernel_for_session
    self.kernel_manager.start_kernel(path=kernel_path, kernel_name=kernel_name)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 735, in run
    value = future.result()
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
    yielded = next(result)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 168, in start_kernel
    super(MappingKernelManager, self).start_kernel(**kwargs)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\multikernelmanager.py", line 110, in start_kernel
    km.start_kernel(**kwargs)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\manager.py", line 259, in start_kernel
    **kw)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\manager.py", line 204, in _launch_kernel
    return launch_kernel(kernel_cmd, **kw)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\launcher.py", line 138, in launch_kernel
    proc = Popen(cmd, **kwargs)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Users\USER\AppData\Local\Continuum\anaconda3\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] Das System kann die angegebene Datei nicht finden (the system cannot find the specified file)
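The traceback bottoms out in subprocess.Popen: the notebook server could not find the executable it was told to launch for the kernel (WinError 2 is "file not found"). A minimal sketch of that failing step, with a deliberately nonexistent path standing in for the stale interpreter recorded in a kernelspec:

```python
import subprocess

# jupyter_client ultimately does roughly Popen(kernel_cmd); if the first
# element of kernel_cmd (the Python executable recorded in the kernelspec's
# kernel.json) no longer exists, Popen raises FileNotFoundError.
kernel_cmd = ["/nonexistent/python", "-m", "ipykernel_launcher"]  # stand-in path
try:
    subprocess.Popen(kernel_cmd)
except FileNotFoundError as e:
    print("kernel launch failed:", e)
```

A common way to spot the problem is to run `jupyter kernelspec list` and check that the `argv` path in each listed kernel.json still points to an existing Python interpreter.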

AF_XDP: No packets for socket with queue ID 0 even though every packet is redirected


I am working from this tutorial: https://github.com/xdp-project/xdp-tutorial/tree/master/advanced03-AF_XDP

I create a socket with queue ID 0 in user space. In my kernel AF_XDP program I filter for UDP packets and redirect them to the user-space socket via an xskmap.

Because I obviously want the user-space program to receive packets, I redirect the packets in the kernel program to index 0:

int index = 0;
if (bpf_map_lookup_elem(&xsks_map, &index)) {
    return bpf_redirect_map(&xsks_map, index, 0);
} else {
    bpf_printk("Didn't find connected socket for index %d!\n", index);
}

I don't get the error message Didn't find connected socket for index 0! via sudo cat /sys/kernel/debug/tracing/trace_pipe, but I don't receive any packets in user space either!

If I just continue to run the program and simultaneously add an ethtool-rule like this:

sudo ethtool -N <eth> flow-type udp4 dst-ip <ip> action 0

my userspace program suddenly starts to receive packets and the error message goes away.

I thought the kernel program would receive every packet sent to that interface, but somehow that's not the case. What did I do wrong?


What is a kernel in Jupyter Notebook, and how is it different from or similar to an operating-system kernel?

Why does compiling my kernel give errors? [closed]


I'm modifying the kernel for a Samsung Galaxy Core Prime SM-G316F to add external Wi-Fi dongle support and HID compatibility. When I enable "USB functions configurable through configfs", it gives errors while compiling... I'm new to this world, so can you help me understand the errors?

https://hastebin.com/yakajehinu.cs

Problems with Spyder kernel


I recently started using Spyder, but after trying to install saseg_runner the kernel stopped working. I uninstalled the whole package and reinstalled it, and it still won't work. When it starts, I get the following error:

Traceback (most recent call last):
  File "C:\Users\vttuppb.DM010CTO\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder\plugins\ipythonconsole.py", line 1572, in create_kernel_manager_and_kernel_client
    kernel_manager.start_kernel(stderr=stderr_handle)
  File "C:\Users\vttuppb.DM010CTO\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\manager.py", line 240, in start_kernel
    self.write_connection_file()
  File "C:\Users\vttuppb.DM010CTO\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\connect.py", line 547, in write_connection_file
    kernel_name=self.kernel_name
  File "C:\Users\vttuppb.DM010CTO\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\connect.py", line 212, in write_connection_file
    with secure_write(fname) as f:
  File "C:\Users\vttuppb.DM010CTO\AppData\Local\Continuum\anaconda3\lib\contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "C:\Users\vttuppb.DM010CTO\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\connect.py", line 102, in secure_write
    with os.fdopen(os.open(fname, open_flag, 0o600), mode) as f:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\vttuppb.DM010CTO\\AppData\\Roaming\\jupyter\\runtime\\kernel-da066a0f3e.json'

I've tried all the possible conda updates using the prompt and I can't find a solution. Can anyone please help?
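For context, the failing call is jupyter_client's secure_write, which creates the kernel connection file with owner-only permissions before the kernel can start. A minimal sketch of that step, using a temporary directory as a stand-in for the Jupyter runtime directory (the exact internals of secure_write are an assumption based on the traceback):

```python
import os
import tempfile

# jupyter_client's secure_write essentially does an exclusive 0o600 create of
# the connection file; if the runtime directory denies writes (e.g. locked
# down by an antivirus tool or wrong ACLs on AppData\Roaming\jupyter\runtime),
# os.open fails with PermissionError, as in the traceback above.
runtime_dir = tempfile.mkdtemp()  # stand-in for the real Jupyter runtime dir
fname = os.path.join(runtime_dir, "kernel-demo.json")
open_flag = os.O_CREAT | os.O_WRONLY | os.O_TRUNC
with os.fdopen(os.open(fname, open_flag, 0o600), "w") as f:
    f.write("{}")  # a real connection file holds ports, key, transport, etc.
print(os.path.exists(fname))
```

This suggests checking write permissions on the runtime directory (shown by `jupyter --runtime-dir`) rather than reinstalling packages.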

Many separation lines using the RBF kernel in an SVM


Below is my code. It takes a numeric column and creates a new column, label, that contains either -1 or 1.

If the number is 14000 or higher, we label it -1 (outlier); if the number is lower than 14000, we label it 1 (normal).

## Here I just import all the libraries and import the column with my dataset 
## Yes, I am trying to find anomalies using only the data from one column

df['label'] = [-1 if x >= 14000 else 1 for x in df['data_numbers']]  #What I explained above

data = df.drop('label',axis=1)                         
target = df['label']
outliers = df[df['label']==-1]

outliers = outliers.drop('label',axis=1)

from sklearn.model_selection import train_test_split
train_data, test_data, train_target, test_target = train_test_split(data, target, train_size = 0.8)
train_data.shape

nu = outliers.shape[0] / target.shape[0]
print("nu", nu)

model = svm.OneClassSVM(nu=nu, kernel='rbf', gamma=0.00005) 
model.fit(train_data)

from sklearn import metrics
preds = model.predict(train_data)
targs = train_target 
print("accuracy: ", metrics.accuracy_score(targs, preds))
print("precision: ", metrics.precision_score(targs, preds)) 
print("recall: ", metrics.recall_score(targs, preds))
print("f1: ", metrics.f1_score(targs, preds))
print("area under curve (auc): ", metrics.roc_auc_score(targs, preds))
train_preds = preds

preds = model.predict(test_data)
targs = test_target 
print("accuracy: ", metrics.accuracy_score(targs, preds))
print("precision: ", metrics.precision_score(targs, preds)) 
print("recall: ", metrics.recall_score(targs, preds))
print("f1: ", metrics.f1_score(targs, preds))
print("area under curve (auc): ", metrics.roc_auc_score(targs, preds))
test_preds = preds


from mlxtend.plotting import plot_decision_regions   # as an RBF SVM is used, many decision boundaries are drawn, unlike the single one of a linear SVM
# the central points at the top with blue squares are outliers, while at the bottom they are orange triangles (normal values)
plot_decision_regions(np.array(train_data), np.array(train_target), model)
plt.show()


My graph seems to have many separation lines; I thought I would only get one that differentiates the outliers from the normal data.
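For intuition: the RBF kernel is local, so a one-class SVM's decision function can form several disjoint boundary contours, unlike a linear kernel's single hyperplane. A pure-NumPy sketch of that locality (the sample values 14000/14010/20000 are just illustrative points from the question's labeling threshold):

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.00005):
    """RBF kernel value exp(-gamma * ||x - y||^2):
    near 1 for close points, decaying toward 0 for distant ones."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Similarity decays with distance, so each support vector only influences its
# neighborhood -- which is why several separate boundary contours can appear
# in the plot instead of one straight separating line.
print(rbf_kernel([14000], [14010]))  # close points: similarity close to 1
print(rbf_kernel([14000], [20000]))  # distant points: similarity close to 0
```

With a linear kernel the decision function is a single affine function of the input, so only one boundary is possible; with RBF, the boundary follows the local density of the training data.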

Why does my Ubuntu 18.04 no longer recognize my 2nd screen? [closed]


I'm facing a problem I don't understand at all. I have a laptop and an external screen connected to it via HDMI. For more than 4 months, it worked perfectly. I dual-boot Ubuntu 18.04 and Windows 10, and both recognized the external screen.

The 1st time I used the external screen, I just plugged the HDMI cable into my computer's HDMI port and it worked. The external screen was recognized.

But last Monday (because of bad weather?) Ubuntu decided the external screen was not worthy of being recognized anymore. Did an unstable release of the kernel screw it up?

As you can see, I'm kind of frustrated, as I have no idea what is happening. Has anybody had a similar problem?

EDIT: I've run dmesg twice. Just before running it the 1st time, I plugged the HDMI cable into the computer; the computer did not detect any HDMI event. The 2nd time, I unplugged the HDMI just before, and again the computer did not detect any HDMI event.

Then I went to the settings, in the external devices section. The 2nd screen does not appear. None of this makes any sense.


