Channel: Active questions tagged kernel - Stack Overflow

How to capture ETW in kernel mode?


I'm trying to capture some syscalls using ETW in my driver; to be more precise, I need to capture the NtWriteVirtualMemory and NtReadVirtualMemory user-mode calls. I tried using the Microsoft-Windows-Threat-Intelligence provider: I used PerfView to dump the Threat Intelligence XML for Windows 10 1909, compiled the manifest with the message compiler (exactly what Microsoft says to do), and checked Microsoft's ETW sample code. But for some reason my callback is called only once, when EtwRegister is called in my DriverEntry. To register my callback I just called EtwRegister. What should I do so that my callback gets called? I'm using Microsoft's sample code with a header file generated by mc.exe (the message compiler).
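For reference, the registration pattern being described looks roughly like this (a minimal sketch; the provider GUID, callback name, and variable names are placeholders standing in for the ones generated by mc.exe, not the real values):

#include <ntddk.h>

// Placeholder GUID: in practice this comes from the header generated by
// mc.exe for the Microsoft-Windows-Threat-Intelligence manifest.
static const GUID PlaceholderProviderGuid =
    { 0x00000000, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 } };

static REGHANDLE g_RegHandle;

// Note: the enable callback passed to EtwRegister fires when a trace session
// enables or disables the provider, not once per logged event.
static VOID NTAPI EnableCallback(
    LPCGUID SourceId, ULONG ControlCode, UCHAR Level,
    ULONGLONG MatchAnyKeyword, ULONGLONG MatchAllKeyword,
    PEVENT_FILTER_DESCRIPTOR FilterData, PVOID CallbackContext)
{
    UNREFERENCED_PARAMETER(SourceId);
    UNREFERENCED_PARAMETER(Level);
    UNREFERENCED_PARAMETER(MatchAnyKeyword);
    UNREFERENCED_PARAMETER(MatchAllKeyword);
    UNREFERENCED_PARAMETER(FilterData);
    UNREFERENCED_PARAMETER(CallbackContext);
    DbgPrint("EnableCallback: ControlCode=%lu\n", ControlCode);
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    return EtwRegister(&PlaceholderProviderGuid, EnableCallback, NULL, &g_RegHandle);
}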


Unexpected result when using container_of macro (Linux kernel)


I have a problem with using the container_of macro in the Linux kernel. I have the following code:

#define container_of(ptr, type, member) ({ \
        const typeof( ((type *)0)->member) *__mptr = (ptr); \
        (type *)( (char *)__mptr - offsetof(type, member) );})


struct list_head
{
    struct list_head *prev;
    struct list_head *next;
};


struct fox
{
    unsigned long tail_length;
    unsigned long weight;
    unsigned int is_fantastic;

    /*Make this struct a node of the linked list*/
    struct list_head list;
};

I want to make the fox structure a node of a linked list.

int main(void)
{
    struct list_head node_first = {.prev=NULL, .next=NULL};
    struct fox first_f = {.tail_length=3, .weight=4, .is_fantastic=0, .list=node_first};

    struct fox *second_f; 
    second_f = container_of(&node_first, struct fox, list);
    printf("%lu\n", second_f->tail_length);
    return 0;
}

I expected that I would see 3 in the terminal, since second_f points to the first_f structure, but instead I get 140250641491552 (some "random" value from memory, I think).
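For reference, a minimal sketch of the arithmetic container_of performs: it subtracts the member's offset from the member's address, so it only recovers the enclosing struct when the pointer really is the address of that embedded member (here &first_f.list, rather than a separate object such as node_first):

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) ({ \
        const typeof( ((type *)0)->member) *__mptr = (ptr); \
        (type *)( (char *)__mptr - offsetof(type, member) );})

struct list_head { struct list_head *prev, *next; };

struct fox {
    unsigned long tail_length;
    unsigned long weight;
    unsigned int is_fantastic;
    struct list_head list;
};

int main(void)
{
    struct fox first_f = {.tail_length = 3, .weight = 4, .is_fantastic = 0,
                          .list = {.prev = NULL, .next = NULL}};

    /* &first_f.list is the embedded member, so subtracting
     * offsetof(struct fox, list) lands back on first_f itself. */
    struct fox *second_f = container_of(&first_f.list, struct fox, list);
    printf("%lu\n", second_f->tail_length);   /* prints 3 */
    return 0;
}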

Monitoring Process Syscalls in Live Environment


I've been working on a project for a little while, and the first step is building a library of syscall traces for processes. Essentially, what I'm trying to do is have a system wherein every time a process requests an OS service via a syscall, the relevant information about the event (calling process, time, syscall name) gets logged to a file.

Theoretically, this sounds like a simple enough thing to do; however, implementing it is becoming more of a pain as time goes on. I suppose the main thing causing issues for me is a general lack of knowing where to start with the implementation.

Initially, I thought that this could all be handled by adding a few lines of code to the kernel entry point, but after digging through entry_64.S for a little while, I came to the conclusion that there must be an easier way. The next idea I had was to overwrite all the services pointed to by sys_call_table with my own service that did the logging and then called the original service. But, it turns out, there are some difficulties with this method on Linux kernel 5.4.18 because sys_call_table is no longer exported. And even after recompiling the kernel so that sys_call_table is exported, the table is in a memory-protected location. Lastly, I've been experimenting with auditd. Specifically, I followed this link, but it doesn't seem to be working (when I executed the kill command, there was only a corresponding result in ausearch about 50% of the time, based on timestamps).

I'm getting a little burned out by all these dead ends, and am really hoping to finally have this first stage of my project up and running. Does anyone have any pointers as to what I should try?

Solution: BPFTrace was exactly what I was looking for.

How to build and run android kernel with kasan on a real device


How do I build a kernel with Kernel Address Sanitizer (KASAN) enabled and run it on a real device? There are instructions for Pixel devices at https://source.android.com/devices/tech/debug/kasan-kcov. However, they don't help with other devices. For example, it's not obvious how to adjust the board parameters for another device. Is there any logic to increasing these addresses? How do I calculate the new values?

For example, I was able to build a Samsung kernel with KASAN+KCOV, but the kernel doesn't boot: just a black screen, with no obvious clues from the bootloader in the logs.

How to show kernel_task by ps on Mac?


When I use ps aux or sudo ps aux on Mac, I can see processes owned by root, but I don't see the kernel_task process, which can be seen with sudo htop. Does anybody know how to show kernel_task with ps?

PyODBC SQL Anywhere 17 connect to Sybase: kernel dies


I am working on Ubuntu 18.04. When I use a pyodbc connection with the SQL Anywhere 17 driver to connect to a Sybase DB, my Jupyter notebook kernel dies while trying to establish the connection. The expectation is that I should be able to run this code in Ubuntu and connect to a Sybase DB.

I can connect and run queries from Windows without problems (using a DSN).

I have been working with other drivers for SQL Server, MySQL and MariaDB, and I have not encountered any problems. I believe a connection to a Sybase database needs the SQL Anywhere driver.

Does someone know how to get the connection string that pyodbc passes to the server when I use a DSN? (Maybe this could give me an idea of what I'm doing wrong.)

Some advice?

Code that runs in Windows without problems:

import pyodbc
import pandas as pd

cnxn = pyodbc.connect("DSN=RevDSN")
print(cnxn)
# 'query' holds the SQL statement to run (definition omitted here)
data = pd.DataFrame(pd.read_sql_query(query, cnxn))
cnxn.close()

OS X kernel lock virtual address space into physical memory


To allocate memory I do the following:

uint64_t _addr = 0x00;

kern_return_t err = mach_vm_allocate(mach_task_self(), &_addr, size, VM_FLAGS_ANYWHERE);
if (err != KERN_SUCCESS) {
    printf("failed to allocate %s\n", mach_error_string(err));
}

But can someone please show me how to prevent that memory from being paged to the swap area? In Windows there is VirtualLock.
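One commonly used counterpart on macOS is the POSIX mlock(2) call, which asks the kernel to wire the pages so they are not paged out. A minimal sketch, assuming the mach_vm_allocate() call above succeeded and the process stays within its locked-memory limit:

#include <sys/mman.h>
#include <stdint.h>
#include <stdio.h>

if (mlock((void *)(uintptr_t)_addr, size) != 0) {
    perror("mlock");    /* fails e.g. when the locked-memory limit is exceeded */
} else {
    /* ... use the memory; release the lock later with munlock() ... */
    munlock((void *)(uintptr_t)_addr, size);
}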

What happens when you invoke Kernel.puts() inside Kernel.require() definition?


I was trying to build a program code dependence tree, so I started by overriding the Kernel.require() method to output the data I need for this. When just using the Kernel.p() method for output, everything is fine, and this code gives me the desired data:

def require arg
  super arg
  p "including '#{arg}' in '#{caller_locations(1).first.path}'"
end

But I have noticed that when Kernel.puts() is used instead, it seems to work just fine for some require statements, but for others I get an uninitialized constant CodeRay::Encoders::PluginHost error:

Traceback (most recent call last):
    28: from ./thief:9:in `<main>'
    27: from ./thief:9:in `require_relative'
    26: from /home/siery/devel/eco-sim/lib/thief.rb:12:in `<top (required)>'
    25: from /home/siery/devel/eco-sim/lib/thief.rb:13:in `<module:Thief>'
    24: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    23: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    22: from /home/siery/devel/eco-sim/lib/engine.rb:2:in `<top (required)>'
    21: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    20: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    19: from /home/siery/devel/eco-sim/lib/screen_area.rb:1:in `<top (required)>'
    18: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    17: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    16: from /home/siery/devel/eco-sim/lib/map.rb:2:in `<top (required)>'
    15: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    14: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    13: from /home/siery/devel/eco-sim/lib/debug.rb:1:in `<top (required)>'
    12: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    11: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    10: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry.rb:152:in `<top (required)>'
     9: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     8: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     7: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry/color_printer.rb:2:in `<top (required)>'
     6: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry/color_printer.rb:3:in `<class:Pry>'
     5: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry/color_printer.rb:5:in `<class:ColorPrinter>'
     4: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     3: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     2: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/coderay-1.1.2/lib/coderay/encoders.rb:1:in `<top (required)>'
     1: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/coderay-1.1.2/lib/coderay/encoders.rb:10:in `<module:CodeRay>'
/home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/coderay-1.1.2/lib/coderay/encoders.rb:12:in `<module:Encoders>': uninitialized constant CodeRay::Encoders::PluginHost (NameError)
Did you mean?  CodeRay::PluginHos

Is that because my Kernel::require definition comes before some of Kernel::puts's requirements have been loaded?


Format kernel task_struct->start_time into hour/min/sec form


I am trying to print out the PID and start time of processes in a Linux kernel module. However, the raw start_time field is a long integer and needs to be formatted, and I am having trouble with the formatting.

So far I have:

list_for_each(list, &parent->children) {
        task = list_entry(list, struct task_struct, sibling);
        time64_t start = task->start_time;
        struct tm time;
        time64_to_tm(start, 0,  &time);

        printk(KERN_INFO "%d-(%d:%d:%d)", task->pid, time.tm_hour, time.tm_min,  time.tm_sec);
}

To test this, I ran sleep 100 & three times and loaded the module with bash's PID as the parameter.

Output is:

[ 9608.211443] 6228 // bash pid
[ 9608.211444] 23574-(23:39:49)
[ 9608.211445] 23576-(20:55:25)
[ 9608.211446] 23577-(11:2:19)

Here, all PIDs are correct; there is no issue with them. My problem is with formatting the time: obviously these are not the correct times.

My local time when I ran these commands was 17:55.

I am using Ubuntu, in case you need to know.
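For reference, here is one way to turn that value into wall-clock time. This is only a sketch, assuming start_time holds nanoseconds since boot (as it does on recent kernels) and that UTC output is acceptable:

#include <linux/ktime.h>
#include <linux/timekeeping.h>
#include <linux/math64.h>
#include <linux/time.h>

/* task->start_time is nanoseconds since boot, not seconds since the epoch,
 * so add the wall-clock boot time before converting to a broken-down time. */
struct timespec64 boottime;
struct tm tm_result;
time64_t start_secs;

getboottime64(&boottime);
start_secs = boottime.tv_sec + div_u64(task->start_time, NSEC_PER_SEC);
time64_to_tm(start_secs, 0, &tm_result);    /* offset 0 => UTC */

printk(KERN_INFO "%d-(%02d:%02d:%02d)\n", task->pid,
       tm_result.tm_hour, tm_result.tm_min, tm_result.tm_sec);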

Assembly kernel crashes when executing code after address 0x6000 [closed]


In my kernel I have virtual memory enabled (0x0 is identity mapped to 0xC0000000).

Everything runs fine until code execution reaches address 0x6000, or around that address (0xC0000000 + 0x6000 virtual).

At this point, the kernel crashes for some reason.

Virtual memory shouldn't be a problem since each page table maps way more than 0x6000. I checked and the code is not overwritten at address 0x6000.

I set up my GDT so that it opens up all 4 GB of available memory:

gdt_start:

        gdt_null:
            dd 0x0
            dd 0x0

        gdt_code:
            dw 0xffff
            dw 0x0
            db 0x0
            db 10011010b ; 1st flags, type flags
            db 11001111b ; 2nd flags, Limit (bits 16-19)
            db 0x0

        gdt_data:
            dw 0xffff
            dw 0x0
            db 0x0
            db 10010010b ; 1st flags, type flags
            db 11001111b ; 2nd flags, Limit (bits 16-19)
            db 0x0

        gdt_end:

        gdt_descriptor:
            dw gdt_end - gdt_start - 1
            dd gdt_start

Assembly page fault handler cannot be called due to invalid stack pointer


When my page fault handler gets called (it is supposed to hang the system), some values are pushed onto the stack before the handler runs. I have virtual memory enabled, and when I set up an invalid stack pointer (esp) and the int 14 handler gets called, it immediately causes another page fault, and so on and so on. How should I resolve this situation?

My int14 code:

isr14:
    ; interrupt handler for isr14
    jmp $
    iretd

The code that causes it to break:

mov esp, 0x1000 ; 0x1000 is not mapped in the VM directory
push dword 'A'
jmp $

Section of my IDT table:

irq14:
    dw isr14
    dw 0x0008
    db 0x00
    db 10101110b
    dw 0x0000

irq15:
........

modinfo: ERROR: could not get modinfo from 'hello_1': Exec format error


I am trying to insert a simple module into the kernel that prints hello world, but on inserting the module using insmod it gives the following error:

insmod: ERROR: could not insert module ./hello-1.ko: Invalid module format

The Linux Kernel Module Programming Guide says this is due to a vermagic difference, so I tried using modinfo to see the vermagic of the module hello-1.ko and got the error stated in the title.

I have searched a lot, but I am not getting any idea of how to proceed further.

I am using Linux Mint 19.3 Tricia.

Kernel: 5.0.0-32-generic

The module:

#include <linux/module.h>       
#include <linux/kernel.h>       
int init_module(void)
{        
    printk(KERN_INFO "Hello world 1.\n");        
    return 0;
}
void cleanup_module(void)
{
        printk(KERN_INFO "Goodbye world 1.\n");
}

Its Makefile:

obj-m += hello-1.o
all: 
    make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
    make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Where can I browse the source code for libc online (like Doxygen)?

$
0
0

Sometimes I want to look up the implementations of functions in the standard library. I've downloaded the source code, but it's quite messy.

Just grepping is not really suitable because of the many hits.

Does anyone know of a Doxygen-style web page that has this documentation?

The same goes for the Linux kernel.

Thanks

Failed to start spyder kernel after setting Python interpreter to new environment


I made a new environment where I installed geopandas, xarray, and regionmask. When I try to switch environments using the modular approach (by installing spyder-kernels in the new environment and pointing Spyder's Python interpreter setting at the new environment, as described here: https://github.com/spyder-ide/spyder/wiki/Working-with-packages-and-environments-in-Spyder#the-modular-approach), an error keeps occurring while starting the kernel:

Traceback (most recent call last):
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\runpy.py", line 193, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\spyder_kernels\console\__main__.py", line 11, in 
start.main()
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\spyder_kernels\console\start.py", line 288, in main
import_spydercustomize()
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\spyder_kernels\console\start.py", line 39, in import_spydercustomize
import spydercustomize
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\spyder_kernels\customize\spydercustomize.py", line 27, in 
from spyder_kernels.comms.frontendcomm import CommError, frontend_request
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\spyder_kernels\comms\frontendcomm.py", line 17, in 
from jupyter_client.localinterfaces import localhost
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\jupyter_client\__init__.py", line 4, in 
from .connect import *
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\jupyter_client\connect.py", line 21, in 
import zmq
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\zmq\__init__.py", line 47, in 
from zmq import backend
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\zmq\backend\__init__.py", line 40, in 
reraise(*exc_info)
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\zmq\utils\sixcerpt.py", line 34, in reraise
raise value
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\zmq\backend\__init__.py", line 27, in 
_ns = select_backend(first)
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\zmq\backend\select.py", line 28, in select_backend
mod = __import__(name, fromlist=public_api)
File "C:\Users\Justin\anaconda3\envs\geo_env\lib\site‑packages\zmq\backend\cython\__init__.py", line 6, in 
from . import (constants, error, message, context,
ImportError: DLL load failed while importing error: The specified module could not be found.

How can I debug this?

How do KVM/QEMU and the guest OS handle page faults?


For example, I have a host OS (say, Ubuntu) with KVM enabled. I start a virtual machine with QEMU to run a guest OS (say, CentOS). It is said that to the host OS, this VM is just a process. So from the host's point of view, it handles page faults as usual (e.g., allocating page frames as needed and swapping pages based on the active/inactive lists if necessary).

Here is the question and my understanding. Within the guest OS, since it's still a full-fledged OS, I assume it still has all the mechanisms for handling virtual memory. It sees some virtualized physical memory provided by QEMU. By virtualized physical memory I mean the guest OS doesn't know it is in a VM and still works as it would on a real physical machine, but what it has is really an abstraction provided by QEMU. So even if a page frame has been allocated to it, if that frame is not in the guest's page table, the guest OS will still trigger a page fault and then map some page to the frame. What's worse, there may be a double page fault, where the guest first allocates some page frames upon a page fault, which in turn triggers a page fault in the host OS.

However, I have also heard of something like shadow page tables, which it seems could optimize away this unnecessary double page fault and double page table issue. I also looked at some other kernel implementations, specifically unikernels, e.g., OSv, IncludeOS, etc. I didn't find anything related to page fault and page table mechanisms. I did see some symbols like page_fault_handler, but nothing as extensive as what I saw in the Linux kernel code. It seems memory management is not a big deal in these unikernel implementations, so I assume QEMU/KVM and some of Intel's virtualization technologies have already handled it.

Any ideas on this topic? Good references/papers/resources on this problem, or some hints, would be very helpful.


Incorrect SHA digest for very long messages when using kernel crypto


I'm trying to compute a SHA-1 hash digest using the kernel crypto API. I get the right results for messages under 4096 bytes; anything beyond that is incorrect. The test itself does not fail, but it gives incorrect results.

Comparison of kernel crypto versus OpenSSL

The left side shows output from OpenSSL and the right side from kernel crypto. The reason I know the left side (OpenSSL) gives the right results is that they were validated by a lab. In the image, the message length is 32984 bits, which is 4123 bytes.

My code is shown below:

int numpages = (msgLength / PAGE_SIZE);
if ( msgLength % PAGE_SIZE )
    numpages++; // Overflow

struct crypto_shash *tfm = NULL;
struct shash_desc *desc = NULL;
unsigned char *page_msg[numpages];
unsigned char *page_hash;
int i, msg_remaining;

tfm = crypto_alloc_shash("sha1", 0, 0);

desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
    if (!desc) {
        LOG_ERROR("Unable to allocate struct shash_desc\n");
        goto free_return;
    }

desc->tfm = tfm;
desc->flags = 0;

// temporary storage for hash in block-memory
page_hash = (unsigned char*)get_zeroed_page(GFP_KERNEL);

// setup message in block-memory
i=0;
msg_remaining = msgLength;
while ( msg_remaining > 0 ) {
    page_msg[i] = (unsigned char*)get_zeroed_page(GFP_KERNEL);
    memcpy(page_msg[i], msg + (i * PAGE_SIZE), (msg_remaining > PAGE_SIZE) ? PAGE_SIZE : msg_remaining);
    i++;
    msg_remaining -= PAGE_SIZE;
}

// do the operation
crypto_shash_init(desc);
if ( 0 != crypto_shash_digest(desc, page_msg[0], msgLength, page_hash) ) {
    LOG_ERROR("Bad Digest Returned\n");
    goto free_return;
}

free_return:
    crypto_free_shash(tfm);
    free_page((unsigned long)page_hash);
    for ( i = 0; i < numpages; i++ ) {
        free_page((unsigned long)page_msg[i]);
    }
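Worth noting: the pages returned by separate get_zeroed_page() calls are not adjacent in the kernel's virtual address space, while crypto_shash_digest() reads one contiguous buffer, so everything it reads past the first PAGE_SIZE bytes of page_msg[0] is not actually the message. A minimal sketch of feeding the data page by page instead, using the init/update/final calls of the same shash API (error handling trimmed):

int ret;

ret = crypto_shash_init(desc);
if (ret)
    goto free_return;

for (i = 0, msg_remaining = msgLength; msg_remaining > 0; i++) {
    int chunk = (msg_remaining > PAGE_SIZE) ? PAGE_SIZE : msg_remaining;

    /* hash one page-sized chunk at a time */
    ret = crypto_shash_update(desc, page_msg[i], chunk);
    if (ret)
        goto free_return;
    msg_remaining -= chunk;
}

ret = crypto_shash_final(desc, page_hash);
if (ret)
    LOG_ERROR("Bad Digest Returned\n");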

Pintos user program does not print


I'm trying to follow this guide, specifically the section about user programs. Apparently, I'm able to successfully copy a program from Ubuntu into the Pintos filesystem, because I can see the file by running pintos -q ls.

Output of pintos -q ls

When running this:

pintos-mkdisk filesys.dsk --filesys-size=2
pintos -f -q
pintos -p ../../examples/echo -a echo -- -q
pintos -q run 'echo x'

I only get this, and no printing:

Running the echo program inside pintos

Any idea why I'm not seeing the output? I've also tried with the "hellopintos" file, which is simply a hello world like this:

#include <stdio.h>
#include <syscall.h>

void main(){
    printf("Hello pintos\n");
}

File system driver


How can I differentiate between the requests raised to the kernel when a file is created manually inside a directory and when a file is created programmatically using the CreateFile() function?

I want to create a directory where manual file creation is not allowed, but where a file can be created programmatically. Please help me with this.

Why so many netlink RTM_NEWLINK messages when starting a Docker container?


I am working on a packet sniffer app and have it set up so that there is one capture thread per interface (as opposed to one thread capturing on 'all'). It works fine, but the code needs to listen for changes to the list of interfaces so that it can manage the capture threads.

I wrote a small function that does what I want using the netlink API, with RTMGRP_LINK in the nl_groups field, acting specifically on the RTM_DELLINK and RTM_NEWLINK message types (a sketch of the listener setup is shown below). It works as expected, but I don't really understand the logic of the messages received from the kernel when a Docker container starts.
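Roughly, the subscription amounts to this (a minimal sketch; error handling and the IFLA_* attribute parsing that yields the interface name and MAC are trimmed):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
    char buf[8192];
    ssize_t len;
    struct sockaddr_nl sa = {0};
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

    sa.nl_family = AF_NETLINK;
    sa.nl_groups = RTMGRP_LINK;     /* subscribe to link add/remove/change notifications */
    bind(fd, (struct sockaddr *)&sa, sizeof(sa));

    while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
        struct nlmsghdr *nh;
        for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len); nh = NLMSG_NEXT(nh, len)) {
            if (nh->nlmsg_type == RTM_NEWLINK)
                printf("RTM_NEWLINK\n");
            else if (nh->nlmsg_type == RTM_DELLINK)
                printf("RTM_DELLINK\n");
        }
    }
    close(fd);
    return 0;
}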

For example, running docker run -it centos:7 /bin/bash creates the following flow of messages:

RTM_NEWLINK NAME: vethaec0b80 MAC: a2:a1:2c:48:72:f4
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: docker0 MAC: 02:42:0d:8f:a2:f9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: docker0 MAC: 02:42:0d:8f:a2:f9
RTM_DELLINK NAME: vethaec0b80 MAC: a2:a1:2c:48:72:f4
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: docker0 MAC: 02:42:0d:8f:a2:f9

and exiting the container generates:

RTM_NEWLINK NAME: vethaec0b80 MAC: 02:42:ac:11:00:02
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_DELLINK NAME: vethaec0b80 MAC: 02:42:ac:11:00:02
RTM_NEWLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_DELLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: docker0 MAC: 02:42:0d:8f:a2:f9
RTM_DELLINK NAME: vethb4ca0db MAC: fe:e8:79:7a:ce:d9
RTM_NEWLINK NAME: docker0 MAC: 02:42:0d:8f:a2:f9

As you can see, when starting the container an RTM_NEWLINK message is received for two virtual interfaces (one of which is destroyed) and for the docker0 bridge, which was already up and running. Terminating the container is a similar situation: two virtual interfaces are involved (both destroyed after RTM_NEWLINK is sent for them again).

Questions

1) Why are 2 virtual interfaces created and only one kept when starting the container?

2) Why are the RTM_NEWLINK messages sent so many times?

3) On terminating the container, why does it send so many RTM_NEWLINK messages before sending the RTM_DELLINK message?

Compiling AOSP Kernel with KASAN


I'm struggling to compile the Linux kernel for usage in AOSP with KASAN & KCOV enabled. I then intend to flash it to a Pixel 2 XL (taimen) and use Syzkaller to fuzz it.

This is what I did:

1. Build unmodified kernel (works)

My reference: https://source.android.com/setup/build/building-kernels

  • Determine branch... android-msm-wahoo-4.4-pie-qpr2
  • $ repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-wahoo-4.4-pie-qpr2
  • $ repo sync -j8 -c
  • $ build/build.sh -j8
  • Connect phone via USB
  • $ adb reboot bootloader
  • $ fastboot boot out/android-msm-wahoo-4.4/dist/Image.lz4-dtb (Works fine)

2. Build kernel with KASAN & KCOV (fails)

POST_DEFCONFIG_CMDS="check_defconfig && update_debug_config"
function update_debug_config() {
    ${KERNEL_DIR}/scripts/config --file ${OUT_DIR}/.config \
         -d CONFIG_KERNEL_LZ4 \
         -e CONFIG_KASAN \
         -e CONFIG_KASAN_INLINE \
         -e CONFIG_KCOV \
         -e CONFIG_SLUB \
         -e CONFIG_SLUB_DEBUG \
         --set-val FRAME_WARN 0
    (cd ${OUT_DIR} && \
     make O=${OUT_DIR} $archsubarch CC=${CC} CROSS_COMPILE=${CROSS_COMPILE} olddefconfig)
}
  • $ build/build.sh -j8

But after CHK include/generated/compile.h I get many undefined reference errors to various asan-symbols, e.g. undefined reference to __asan_alloca_poison.

I did some research and read about adding -fsanitize=address and -shared-libasan (or -shared-libsan) to CFLAGS and LDFLAGS. I did that (for which I had to hard-code it into build/build.sh; isn't there a more convenient way?), but to no avail:

I ended up with aarch64-linux-android-ld: -f may not be used without -shared.

So I tried reading up on ld's -shared flag and adding it to LDFLAGS (more like a guess really). Resulted in aarch64-linux-android-ld: -r and -shared may not be used together.

I really don't know where to go from here, or what's going wrong in general.

Any help really appreciated!


Update: At first it seemed that using gcc instead of clang resolved the issue. The phone boots up fine and the buttons work, but the touchscreen does not respond. I am looking into the reasons...
