Channel: Active questions tagged kernel - Stack Overflow

What happens when you call puts() at the end of a require() definition?


I'm trying to overload the Kernel.require() method to collect the data needed to build a code dependency tree. This is how I imagine the new require method:

def require arg
  super arg
  puts "including '#{arg}' in '#{caller_locations(1).first.path}'"
end

Unfortunately, I found this to be breaking the require() invocation somewhere else in the code:

Traceback (most recent call last):
    28: from ./thief:9:in `<main>'
    27: from ./thief:9:in `require_relative'
    26: from /home/siery/devel/eco-sim/lib/thief.rb:12:in `<top (required)>'
    25: from /home/siery/devel/eco-sim/lib/thief.rb:13:in `<module:Thief>'
    24: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    23: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    22: from /home/siery/devel/eco-sim/lib/engine.rb:2:in `<top (required)>'
    21: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    20: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    19: from /home/siery/devel/eco-sim/lib/screen_area.rb:1:in `<top (required)>'
    18: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    17: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    16: from /home/siery/devel/eco-sim/lib/map.rb:2:in `<top (required)>'
    15: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    14: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    13: from /home/siery/devel/eco-sim/lib/debug.rb:1:in `<top (required)>'
    12: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    11: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
    10: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry.rb:152:in `<top (required)>'
     9: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     8: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     7: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry/color_printer.rb:2:in `<top (required)>'
     6: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry/color_printer.rb:3:in `<class:Pry>'
     5: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/pry-0.11.3/lib/pry/color_printer.rb:5:in `<class:ColorPrinter>'
     4: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     3: from /home/siery/devel/eco-sim/lib/thief.rb:5:in `require'
     2: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/coderay-1.1.2/lib/coderay/encoders.rb:1:in `<top (required)>'
     1: from /home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/coderay-1.1.2/lib/coderay/encoders.rb:10:in `<module:CodeRay>'
/home/siery/.rvm/gems/ruby-2.6.2@stable_project/gems/coderay-1.1.2/lib/coderay/encoders.rb:12:in `<module:Encoders>': uninitialized constant CodeRay::Encoders::PluginHost (NameError)
Did you mean?  CodeRay::PluginHost
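
One detail worth flagging, as a hedged observation rather than a confirmed cause of the CodeRay error above: puts returns nil, so this override makes every require return nil instead of the usual true/false. A minimal variant that preserves the return value, defined at the top level like the original:

def require(arg)
  result = super            # let Kernel#require do the actual loading
  puts "including '#{arg}' in '#{caller_locations(1).first.path}'"
  result                    # hand back require's normal true/false result
end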

Jupyter Notebook dead Kernel after Anaconda update


I have the following problem. I updated Anaconda because I got an error importing skimage.io into a Jupyter Notebook project, and then an even bigger error appeared: a dead kernel. Right now I cannot start a project at all, because Jupyter gives the following error:

Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
    http://localhost:8888/?token=aaf85a5e18489792c87cce65d7a53a0263cd5c08cc7248b6
[I 00:55:13.451 NotebookApp] Accepting one-time-token-authenticated connection from ::1
[I 00:55:31.607 NotebookApp] Kernel started: 608d2190-59e6-4888-b09a-e616f67bd5b4
Traceback (most recent call last):
ERROR:tornado.general:Uncaught exception in ZMQStream callback
Traceback (most recent call last):
  File "C:\Program Files\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 432, in _run_callback
    callback(*args, **kwargs)
  File "C:\Program Files\Anaconda3\lib\site-packages\tornado\stack_context.py", line 276, in null_wrapper
    return fn(*args, **kwargs)
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 233, in dispatch_shell
    self.pre_handler_hook()
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 248, in pre_handler_hook
    self.saved_sigint_handler = signal(SIGINT, default_int_handler)
  File "C:\Program Files\Anaconda3\lib\signal.py", line 47, in signal
    handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

  File "C:\Program Files\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "C:\Program Files\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
    app.start()
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 477, in start
    ioloop.IOLoop.instance().start()
  File "C:\Program Files\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 112, in start
    self.asyncio_loop.run_forever()
ERROR:tornado.general:Uncaught exception in zmqstream callback
Traceback (most recent call last):
  File "C:\Program Files\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 450, in _handle_events
    self._handle_recv()
  File "C:\Program Files\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 480, in _handle_recv
    self._run_callback(callback, msg)
  File "C:\Program Files\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 432, in _run_callback
    callback(*args, **kwargs)
  File "C:\Program Files\Anaconda3\lib\site-packages\tornado\stack_context.py", line 276, in null_wrapper
    return fn(*args, **kwargs)
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 233, in dispatch_shell
    self.pre_handler_hook()
  File "C:\Program Files\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 248, in pre_handler_hook
    self.saved_sigint_handler = signal(SIGINT, default_int_handler)
  File "C:\Program Files\Anaconda3\lib\signal.py", line 47, in signal
    handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

  File "C:\Program Files\Anaconda3\lib\asyncio\base_events.py", line 409, in run_forever

Kernel error after updating Spyder in Anaconda


I updated Spyder to version 4.1.0 (together with all other packages in Anaconda). Spyder itself works fine; however, the kernel is not working. I get the following error and can't figure out how to solve it:

An error ocurred while starting the kernel
The error is:

Traceback (most recent call last):
  File "C:\Users\20172010\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder\plugins\ipythonconsole\plugin.py", line 1209, in create_kernel_manager_and_kernel_client
    kernel_manager.start_kernel(stderr=stderr_handle, **kwargs)
  File "C:\Users\20172010\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\manager.py", line 267, in start_kernel
    self.kernel = self._launch_kernel(kernel_cmd, env=env, **kw)
  File "C:\Users\20172010\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\manager.py", line 211, in _launch_kernel
    return launch_kernel(kernel_cmd, **kw)
  File "C:\Users\20172010\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyter_client\launcher.py", line 135, in launch_kernel
    proc = Popen(cmd, **kwargs)
  File "C:\Users\20172010\AppData\Local\Continuum\anaconda3\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Users\20172010\AppData\Local\Continuum\anaconda3\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

Bind a framebuffer to a plane in DRM framework in Linux


I am using Linux 4.14.

CRTC and Framebuffer are parts of the DRM Framework.

In general, a driver needs to create and initialize CRTC (struct drm_crtc) and Framebuffer (struct drm_fbdev_cma in my case).

CRTC stores a pointer to a plane:

struct drm_crtc {
    ...
    struct drm_plane *primary;
    ...
};

A plane stores a pointer to framebuffer:

struct drm_plane {
    ...
    struct drm_framebuffer *fb;
    ...
};
1. How is the framebuffer assigned to a plane? As far as I can see in the code, drivers don't make any assignment to crtc->primary->fb. What kernel functions should I use to bind an fb to the primary plane? There is a function called drm_crtc_init, but in my case it leaves the plane with NULL in the .fb field.

2. Does this require the framebuffer to be created before the CRTC is created?

Please let me know if my understanding of this part of DRM is incorrect.
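
For reference, a hedged sketch of the usual 4.14-era init pattern (the my_* names are hypothetical, not from any real driver): the driver only binds the plane to the CRTC; the .fb field is filled in by the DRM core when userspace, or the fbdev emulation layer, performs a mode set, page flip, or atomic commit that attaches a framebuffer to the plane. So NULL after drm_crtc_init appears to be expected, and the framebuffer does not have to exist before the CRTC is created.

#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_plane.h>

struct my_pipe {
    struct drm_plane plane;
    struct drm_crtc crtc;
};

static const uint32_t my_formats[] = { DRM_FORMAT_XRGB8888 };

static int my_pipe_init(struct drm_device *drm, struct my_pipe *pipe,
                        const struct drm_plane_funcs *plane_funcs,
                        const struct drm_crtc_funcs *crtc_funcs)
{
    int ret;

    ret = drm_universal_plane_init(drm, &pipe->plane, 1, plane_funcs,
                                   my_formats, ARRAY_SIZE(my_formats),
                                   NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
    if (ret)
        return ret;

    /* Bind the primary plane to the CRTC; pipe->plane.fb stays NULL here
     * until a later mode set or atomic commit attaches a framebuffer. */
    return drm_crtc_init_with_planes(drm, &pipe->crtc, &pipe->plane,
                                     NULL, crtc_funcs, NULL);
}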

What is a kernel in Jupyter Notebook, and how is it different from or similar to an actual kernel (as in operating systems)?

perf report shows IPC as 0


In kernel 5+, perf added per-function IPC reporting: http://man7.org/linux/man-pages/man1/perf-report.1.html

My commands:

perf record -b ./a.out
perf report -s symbol

It shows:

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 160  of event 'cycles'
# Event count (approx.): 160
#
# Overhead  Symbol                                  IPC   [IPC Coverage]
# ........  ......................................  ....................
#
    21.88%  [k] 0000000000000000                    -                   
    10.62%  [k] native_write_msr                    0.00  [  0.0%]      
     9.38%  [k] __intel_pmu_enable_all.constprop.0  0.00  [  0.0%]      
     9.38%  [k] intel_pmu_lbr_enable_all            0.00  [  0.0%]      
     4.38%  [k] strncpy_from_user                   0.00  [  0.0%]      
     3.75%  [k] tlb_flush_mmu                       0.00  [  0.0%]      
     3.12%  [k] __check_object_size                 0.00  [  0.0%]      
     3.12%  [k] do_nmi                              0.00  [  0.0%]      
     2.50%  [k] free_pgd_range                      0.00  [  0.0%]      
     2.50%  [k] vma_interval_tree_remove            0.00  [  0.0%]      
     1.88%  [k] cpumask_any_but                     0.00  [  0.0%]    

Why is this IPC 0?

Linux kernel AIO, open system call


Why does Linux kernel AIO not support an asynchronous 'open' system call? After all, 'open' can block on the filesystem for a long time, can't it?

What does a final statement with no side effects do in some Linux kernel macro definitions? [duplicate]


When I read the Linux kernel source code, I see some macro definitions in which the last statement has no side effect, for example:

#define WRITE_ONCE(x, val) \
({                          \
    union { typeof(x) __val; char __c[1]; } __u =   \
        { .__val = (typeof(x)) (val) }; \
    __write_once_size(&(x), __u.__c, sizeof(x));    \
    __u.__val;                  \
})

or :

#define CHECK_DATA_CORRUPTION(condition, fmt, ...)           \
    check_data_corruption(({                     \
        bool corruption = unlikely(condition);           \
        if (corruption) {                    \
            if (IS_ENABLED(CONFIG_BUG_ON_DATA_CORRUPTION)) { \
                pr_err(fmt, ##__VA_ARGS__);      \
                BUG();                   \
            } else                       \
                WARN(1, fmt, ##__VA_ARGS__);         \
        }                            \
        corruption;                      \
    }))

In the first macro the last line is "__u.__val;", and in the second macro the last line is "corruption;".

Why use statements like these?
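
For background, a small user-space sketch (the macro name is illustrative) of why these macros are written this way: they are GCC statement expressions, and a ({ ... }) block evaluates to its last statement, so the bare __u.__val; and corruption; lines are what give the macros a usable value.

#include <stdio.h>

/* The last statement, __tmp;, becomes the value of the whole ({ ... })
 * block, just like __u.__val; in WRITE_ONCE(). */
#define DOUBLE_AND_REPORT(x)            \
({                                      \
    int __tmp = (x) * 2;                \
    printf("doubling %d\n", (x));       \
    __tmp;                              \
})

int main(void)
{
    int result = DOUBLE_AND_REPORT(21);  /* result == 42 */
    printf("result = %d\n", result);
    return 0;
}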


A question about the ARM64 cache maintenance code sync_icache_aliases() in the Linux kernel for a non-aliasing icache


In the Linux kernel on the arm64 platform, the function sync_icache_aliases() is used to sync the I-cache with the D-cache. I understand the aliasing case, but in the non-aliasing case, why does it just do "dc cvau" (in __flush_icache_range()) without really invalidating the icache? Will the I-cache refill from the L2 cache?

void sync_icache_aliases(void *kaddr, unsigned long len)
{
    unsigned long addr = (unsigned long)kaddr;

    if (icache_is_aliasing()) {
        __clean_dcache_area_pou(kaddr, len);
        __flush_icache_all();
    } else {
        /*
         * Don't issue kick_all_cpus_sync() after I-cache invalidation
         * for user mappings.
         */
        __flush_icache_range(addr, addr + len);
    }
}

Building the stock kernel for the J7 Max (using Samsung open source)


I am getting this error while building the stock kernel for the J7 Max (G615F), even though I have set CROSS_COMPILE properly:

hraj@hraj-HP-Pavilion-g4-Notebook-PC:~/Desktop/g615f/kernel/Kernel$ export ANDROID_MAJOR_VERSION=o
hraj@hraj-HP-Pavilion-g4-Notebook-PC:~/Desktop/g615f/kernel/Kernel$ make ARCH=arm64 mt6757-j7maxlte_defconfig
drivers/usb/gadget/Kconfig:131:warning: choice value used outside its choice group
#
# configuration written to .config
#
hraj@hraj-HP-Pavilion-g4-Notebook-PC:~/Desktop/g615f/kernel/Kernel$ make
Makefile:694: Cannot use CONFIG_CC_STACKPROTECTOR_STRONG: -fstack-protector-strong not supported by compiler
make: /home/hraj/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc: Command not found
scripts/kconfig/conf  --silentoldconfig Kconfig
drivers/usb/gadget/Kconfig:131:warning: choice value used outside its choice group
Makefile:694: Cannot use CONFIG_CC_STACKPROTECTOR_STRONG: -fstack-protector-strong not supported by compiler
make: /home/hraj/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc: Command not found
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CC      kernel/bounds.s
/bin/sh: 1: /home/hraj/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc: not found
Kbuild:44: recipe for target 'kernel/bounds.s' failed
make[1]: *** [kernel/bounds.s] Error 127
Makefile:1043: recipe for target 'prepare0' failed
make: *** [prepare0] Error 2

You can have a look at the source code: https://github.com/hraj9258/android_kernel_j7maxlte

Sudden problems with Spyder and Jupyter kernels


I loaded Jupyter this morning (Paris/Berlin timezone) and it could not connect to the kernel.

I then tried to open Spyder, and I get the following error:

(screenshot of the error not included)

I looked online for what the cause could be, but I have no background in this kind of "advanced" configuration.

Is it something related to any newly released update?

I don't recall modifying anything since yesterday, when it was working with flying colours.

This is the error that prompts on Jupyter:

(screenshot of the Jupyter error not included)

Thanks in advance for your valuable time.

Hope this post helps others too.

Best regards, Enrique

XGBoost crashing kernel in jupyter notebook


I can't get the XGBoost classifier to work. I am running the code below in a Jupyter notebook, and it always produces the message "The kernel appears to have died. It will restart automatically."

from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X, y)

There is no problem with importing the XGBClassifier, but it crashes upon fitting it to my data. X is a 502 by 33 all-numeric dataframe, and y is the set of 0 or 1 labels for each row. Does anyone know what the problem could be? I installed the newest version of XGBoost through pip3 install, and also through conda install.
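
One way to see the real error, a diagnostic sketch with synthetic data of the same shape (a dead Jupyter kernel hides the traceback; a terminal does not):

import numpy as np
from xgboost import XGBClassifier

# Synthetic stand-in for the real data: 502 rows, 33 numeric features, 0/1 labels
rng = np.random.RandomState(0)
X = rng.rand(502, 33)
y = rng.randint(0, 2, size=502)

model = XGBClassifier()
model.fit(X, y)   # if this crashes when run as a plain script, the actual
                  # error message is printed instead of a dead kernel
print(model.predict(X[:5]))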

Thanks!

Graphics: Can the frame buffer be allocated anywhere in memory?

$
0
0

I'm trying to figure out if the frame buffer - not some software concept, but the actual piece of memory holding the final video frame that is scanned out by the hardware to output it on a display - is in a special memory location.

Or is it the case that some memory is allocated at boot time somewhat randomly (maybe with some constraints applied), this memory gets designated as the frame buffer, and then the scan-out hardware is told at what address it resides?
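
For the second scenario, a hedged kernel-side sketch (my_regs and MY_SCANOUT_ADDR are made-up names): on many display controllers the scan-out engine just takes a DMA address in a register, so the buffer can live anywhere the device can reach, subject to contiguity and alignment constraints.

#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/kernel.h>

#define MY_SCANOUT_ADDR 0x10   /* made-up register offset */

/* Allocate a DMA-able buffer at init time and point the (hypothetical)
 * scan-out engine at its bus address. */
static int my_alloc_scanout(struct device *dev, void __iomem *my_regs,
                            size_t size)
{
    dma_addr_t dma;
    void *vaddr;

    vaddr = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);
    if (!vaddr)
        return -ENOMEM;

    /* Tell the hardware where the frame buffer lives. */
    writel(lower_32_bits(dma), my_regs + MY_SCANOUT_ADDR);
    return 0;
}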

In Jupyter Notebook, the Python 3 kernel is not available in the "New" menu


I'm going crazy; please help me. For no reason, "Python 3" disappeared from the "New" menu in Jupyter Notebook (see screenshot). I tried everything possible: reinstalled Jupyter and the kernel, and of course tried creating a new user on the system. Today I freaked out and completely reinstalled Python and conda… and Python 3 still did not appear in the menu.

(screenshot not included)

> jupyter kernelspec list

Available kernels:
python3 C:\Users\1\AppData\Roaming\jupyter\kernels\python3

> jupyter troubleshoot

$PATH:
C:\ProgramData\Anaconda3\envs\msudev
C:\ProgramData\Anaconda3\envs\msudev\Library\mingw-w64\bin
C:\ProgramData\Anaconda3\envs\msudev\Library\usr\bin
C:\ProgramData\Anaconda3\envs\msudev\Library\bin
C:\ProgramData\Anaconda3\envs\msudev\Scripts
C:\ProgramData\Anaconda3\envs\msudev\bin
C:\ProgramData\Anaconda3\condabin
C:\Program Files\Python38\Scripts
C:\Program Files\Python38
C:\Program Files (x86)\Embarcadero\Studio\20.0\bin
C:\Users\Public\Documents\Embarcadero\Studio\20.0\Bpl
C:\Program Files (x86)\Embarcadero\Studio\20.0\bin64
C:\Program Files (x86)\Common Files\Oracle\Java\javapath
C:\Users\Public\Documents\Embarcadero\Studio\19.0\Bpl
C:\Users\Public\Documents\Embarcadero\Studio\18.0\Bpl
C:\Users\Public\Documents\RAD Studio\10.0\Bpl
C:\Program Files (x86)\Intel\iCLS Client
C:\Program Files\Intel\iCLS Client
C:\Windows\system32
C:\Windows
C:\Windows\System32\Wbem
C:\Windows\System32\WindowsPowerShell\v1.0
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common
C:\Program Files (x86)\Intel\Intel® Management Engine Components\DAL
C:\Program Files\Intel\Intel® Management Engine Components\DAL
C:\Program Files (x86)\Intel\Intel® Management Engine Components\IPT
C:\Program Files\Intel\Intel® Management Engine Components\IPT
C:\Program Files\Intel\WiFi\bin
C:\Program Files\Common Files\Intel\WirelessCommon
C:\WINDOWS\system32
C:\WINDOWS
C:\WINDOWS\System32\Wbem
C:\WINDOWS\System32\WindowsPowerShell\v1.0
C:\Program Files (x86)\Windows Live\Shared
C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR
C:\Windows\system32
C:\Windows
C:\Windows\System32\Wbem
C:\Windows\System32\WindowsPowerShell\v1.0
C:\Program Files\PuTTY
C:\Program Files\Git\cmd
C:\Program Files\nodejs
C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit
C:\Program Files\dotnet
C:\Program Files\Microsoft SQL Server\130\Tools\Binn
C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn
C:\ProgramData\chocolatey\bin
C:\Program Files\Java\apache-ant-1.10.7\bin
C:\Program Files\Java\jdk1.6.0_18\jre\bin
C:\Program Files\Java\jdk1.8.0_60\bin
C:\Program Files\Java\jdk1.8.0_60\jre\bin
C:\Users\1\AppData\Roaming\npm

sys.path:
C:\ProgramData\Anaconda3\envs\msudev\Scripts
C:\ProgramData\Anaconda3\envs\msudev\python38.zip
C:\ProgramData\Anaconda3\envs\msudev\DLLs
C:\ProgramData\Anaconda3\envs\msudev\lib
C:\ProgramData\Anaconda3\envs\msudev
C:\ProgramData\Anaconda3\envs\msudev\lib\site-packages
C:\ProgramData\Anaconda3\envs\msudev\lib\site-packages\win32
C:\ProgramData\Anaconda3\envs\msudev\lib\site-packages\win32\lib
C:\ProgramData\Anaconda3\envs\msudev\lib\site-packages\Pythonwin

sys.executable:
C:\ProgramData\Anaconda3\envs\msudev\python.exe

sys.version:
3.8.1 (default, Mar 2 2020, 13:06:26) [MSC v.1916 64 bit (AMD64)]

platform.platform():
Windows-10-10.0.18362-SP0

where jupyter:
C:\ProgramData\Anaconda3\envs\msudev\Scripts\jupyter.exe

pip list:
Package Version
------------------ -------------------
attrs 19.3.0
backcall 0.1.0
bleach 3.1.3
certifi 2019.11.28
colorama 0.4.3
decorator 4.4.2
defusedxml 0.6.0
entrypoints 0.3
importlib-metadata 1.5.0
ipykernel 5.1.4
ipython 7.13.0
ipython-genutils 0.2.0
jedi 0.16.0
Jinja2 2.11.1
jsonschema 3.2.0
jupyter-client 6.0.0
jupyter-core 4.6.3
MarkupSafe 1.1.1
mistune 0.8.4
nbconvert 5.6.1
nbformat 5.0.4
notebook 6.0.3
pandocfilters 1.4.2
parso 0.6.2
pickleshare 0.7.5
pip 20.0.2
prometheus-client 0.7.1
prompt-toolkit 3.0.4
Pygments 2.6.1
pyrsistent 0.15.7
python-dateutil 2.8.1
pywin32 227
pywinpty 0.5.7
pyzmq 19.0.0
Send2Trash 1.5.0
setuptools 46.0.0.post20200309
six 1.14.0
terminado 0.8.3
testpath 0.4.4
tornado 6.0.4
traitlets 4.3.3
wcwidth 0.1.8
webencodings 0.5.1
wheel 0.34.2
wincertstore 0.2
zipp 3.1.0

My newly compiled kernel loses networking in QEMU


I compiled a kernel from source:

make defconfig
make kvmconfig
make -j 4

After this, I use the resulting bzImage in my QEMU command:

qemu-system-x86_64 -hda debian.img -kernel bzImage -append "root=/dev/sda console=ttyS0" -nographic -m 4096 -smp 2 --enable-kvm -net user,hostfwd=tcp::10021-:22 -net nic

It mounts, and I get a shell and everything, but it has no network connectivity. In QEMU, it logs:

[FAILED] Failed to start Raise network interfaces.
See 'systemctl status networking.service' for details.

Can someone guide me on this? I already consulted Linux vanilla kernel on QEMU and networking with eth0, but it does not solve my issue. Also, I'm not looking for hardcore qemu-bridge solutions. I'm pretty sure some network drivers are not getting loaded, but I can't figure out how to resolve it. Or maybe I'm missing some kernel .config options.
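
A hedged diagnostic sketch, naming the usual suspects for this setup rather than a confirmed fix: with -net nic, QEMU emulates an e1000 NIC by default on x86, so the guest kernel needs CONFIG_E1000 (or the virtio-net driver, if you switch the NIC model) built in.

# Run in the kernel tree: check whether the NIC drivers made it into the build
grep -E 'CONFIG_(E1000|VIRTIO_NET|VIRTIO_PCI)=' .config

# Enable them if missing, then rebuild
./scripts/config -e CONFIG_E1000 -e CONFIG_VIRTIO_NET -e CONFIG_VIRTIO_PCI
make olddefconfig
make -j"$(nproc)"

Independently of drivers, it is also worth running ip link inside the guest to check whether the interface came up under a name other than the one /etc/network/interfaces expects.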


Is there any simple code for beginners where I can experiment with the different kernels used in the Gaussian process example in scikit-learn, to learn what they do?


Actually, I want to understand the kernels used in the scikit-learn Gaussian process example, but I have zero knowledge of how those kernels behave and when to use which, and I haven't found any basic template code where I can try those kernels one by one and understand them. The partial code is given below:

# Imports implied by the scikit-learn Mauna Loa example this snippet comes from
import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, ExpSineSquared,
                                              RationalQuadratic, WhiteKernel)

X, y = load_mauna_loa_atmospheric_co2()

# Kernel with parameters given in the GPML book

k1 = 66.0**2 * RBF(length_scale=67.0)  # long term smooth rising trend
k2 = 2.4**2 * RBF(length_scale=90.0) \
    * ExpSineSquared(length_scale=1.3, periodicity=1.0)  # seasonal component
# medium term irregularity
k3 = 0.66**2 \
    * RationalQuadratic(length_scale=1.2, alpha=0.78)
k4 = 0.18**2 * RBF(length_scale=0.134) \
    + WhiteKernel(noise_level=0.19**2)  # noise terms
kernel_gpml = k1 + k2 + k3 + k4

gp = GaussianProcessRegressor(kernel=kernel_gpml, alpha=0,
                              optimizer=None, normalize_y=True)
gp.fit(X, y)

print("GPML kernel: %s" % gp.kernel_)
print("Log-marginal-likelihood: %.3f"
      % gp.log_marginal_likelihood(gp.kernel_.theta))

# Kernel with optimized parameters
k1 = 50.0**2 * RBF(length_scale=50.0)  # long term smooth rising trend
k2 = 2.0**2 * RBF(length_scale=100.0) \
    * ExpSineSquared(length_scale=1.0, periodicity=1.0,
                     periodicity_bounds="fixed")  # seasonal component
# medium term irregularities
k3 = 0.5**2 * RationalQuadratic(length_scale=1.0, alpha=1.0)
k4 = 0.1**2 * RBF(length_scale=0.1) \
    + WhiteKernel(noise_level=0.1**2,
                  noise_level_bounds=(1e-3, np.inf))  # noise terms
kernel = k1 + k2 + k3 + k4

gp = GaussianProcessRegressor(kernel=kernel, alpha=0,
                              normalize_y=True)
gp.fit(X, y)

print("\nLearned kernel: %s" % gp.kernel_)
print("Log-marginal-likelihood: %.3f"
      % gp.log_marginal_likelihood(gp.kernel_.theta))

X_ = np.linspace(X.min(), X.max() + 30, 1000)[:, np.newaxis]
y_pred, y_std = gp.predict(X_, return_std=True)

# Illustration
plt.scatter(X, y, c='k')
plt.plot(X_, y_pred)
plt.fill_between(X_[:, 0], y_pred - y_std, y_pred + y_std,
                 alpha=0.5, color='k')
plt.xlim(X_.min(), X_.max())
plt.xlabel("Year")
plt.ylabel(r"CO$_2$ in ppm")
plt.title(r"Atmospheric CO$_2$ concentration at Mauna Loa")
plt.tight_layout()
plt.show()
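
If it helps, here is a minimal self-contained template on toy data (a noisy sine wave of my choosing, not part of the original example) for trying one kernel at a time and watching how the fit changes:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, ExpSineSquared,
                                              RationalQuadratic, WhiteKernel)

# Toy data: a noisy sine wave
rng = np.random.RandomState(0)
X = np.linspace(0, 10, 40)[:, np.newaxis]
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

# Swap this line to experiment: RBF(), ExpSineSquared(), RationalQuadratic(), ...
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)

gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
print("Learned kernel:", gp.kernel_)

X_ = np.linspace(0, 12, 200)[:, np.newaxis]
y_pred, y_std = gp.predict(X_, return_std=True)

plt.scatter(X, y, c='k', s=10)
plt.plot(X_, y_pred)
plt.fill_between(X_[:, 0], y_pred - y_std, y_pred + y_std, alpha=0.3)
plt.title("GP fit with kernel: %s" % kernel)
plt.show()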

What does sock_net() do?


I was studying the communication between user space and the kernel by reading a kernel module that does this, but in the code there is a call to the function sock_net(), which I didn't understand. I searched a lot but didn't find any documentation about this particular function, so what does it actually do?
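
For reference, sock_net() is a one-line inline accessor in include/net/sock.h; paraphrased from memory (worth verifying against your kernel version), it returns the network namespace that a socket belongs to:

static inline struct net *sock_net(const struct sock *sk)
{
        return read_pnet(&sk->sk_net);
}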

Install Python 3.8 kernel in Google Colaboratory


I'm trying to install a new Python version (3.8) using conda:

!wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.2-Linux-x86_64.sh
!chmod +x mini.sh
!bash ./mini.sh -b -f -p /usr/local

This works fine. I can call !python script.py to run a script under Python 3.8.

So I try my luck at installing another Jupyter kernel, with Python 3.8 as the kernel:

!conda install -q -y --prefix /usr/local jupyter
!python -m ipykernel install --name "py38" --user

I check that the kernel is installed:

!jupyter kernelspec list

Then I download the notebook and open it in a text editor to change the kernel specification to:

"kernelspec": {
  "name": "py38",
  "display_name": "Python 3.8"
}

This is the same trick that worked before with JavaScript, Java, and Golang.

I then upload the edited notebook to Google Drive and open it in Google Colab. It cannot find the py38 kernel, so it uses the normal python3 kernel. I run all these cells again:

!wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.2-Linux-x86_64.sh
!chmod +x mini.sh
!bash ./mini.sh -b -f -p /usr/local
!conda install -q -y --prefix /usr/local jupyter
!python -m ipykernel install --name "py38" --user

It installs the Python 3.8 kernel like before. I refresh the browser to let it connect to the new kernel, hoping it will work like the JavaScript, Java, and Golang kernels did before.

It doesn't work; it cannot connect. Here's the notebook.

Any help would be appreciated.

GCC gives "undefined reference" error to extern variable


I know this question has been asked several times, but none of the solutions worked for me. That's why I created a new question.

I'm trying to add the custom charging driver ThunderCharge to my kernel. The commits for the same can be found here: https://github.com/androbada525/Elindir-Kernel/commits/pie-oc-jack-fix

While compiling, I'm getting several "undefined reference to custom_ac_current" errors from a single file, drivers/power/qpnp-smbcharger.c. The error log can be found here: https://del.dog/elindir-log-1.txt

The header file - thundercharge_control.h - where the variable custom_ac_current is declared is included in that file. The variable is declared using the extern keyword in thundercharge_control.h and is defined in thundercharge_control.c.

I can't work out why I'm getting the undefined reference errors even though the definition of the corresponding variable already exists.

Here are links to the files in question:

qpnp-smbcharger.c: https://github.com/androbada525/Elindir-Kernel/blob/pie-oc-jack-fix/drivers/power/qpnp-smbcharger.c

thundercharge_control.h: https://github.com/androbada525/Elindir-Kernel/blob/pie-oc-jack-fix/drivers/power/thundercharge_control.h

thundercharge_control.c: https://github.com/androbada525/Elindir-Kernel/blob/pie-oc-jack-fix/drivers/power/thundercharge_control.c

Here is the kernel source from which the ThunderCharge commits were sourced: https://github.com/varunchitre15/thunderzap_tomato/commits/cm-13.0
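
To make the failure mode concrete, a minimal analogue (the code below is illustrative, not the real driver sources): an extern declaration in a header only promises that the symbol exists somewhere; the link succeeds only if the object file containing the definition is actually compiled and linked, which in a kernel tree means an obj-y/obj-m entry in the directory's Makefile.

/* thundercharge_control.h -- declaration only, no storage */
extern unsigned int custom_ac_current;

/* thundercharge_control.c -- the one real definition */
unsigned int custom_ac_current = 1500;

/* qpnp-smbcharger.c -- a user of the symbol */
#include "thundercharge_control.h"
unsigned int read_ac_current(void) { return custom_ac_current; }

/* "undefined reference to custom_ac_current" at link time means the
 * object for thundercharge_control.c never made it into the link; in
 * a kernel tree that usually points at drivers/power/Makefile missing
 * a line like:  obj-y += thundercharge_control.o  */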

How can I retrieve the encryption keys for my IPsec/L2TP session?


I'm investigating the IPsec protocol stack with Wireshark. If I need to decrypt a tunnel's traffic, I use the ip xfrm state command, which returns all the needed material. While investigating the ip source code, I discovered that the encryption keys are retrieved from the kernel via netlink. So I was wondering: is there any other way to get this info from the kernel, bypassing netlink? Perhaps there is some ioctl to do this. I would also like to know where in the kernel code these keys are stored.
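
For anyone following along, the netlink-backed command mentioned above looks like this (addresses, SPIs, and keys below are placeholders, not real output):

$ ip xfrm state
src 10.0.0.1 dst 10.0.0.2
    proto esp spi 0x00000001 reqid 1 mode tunnel
    auth-trunc hmac(sha1) 0x... 96
    enc cbc(aes) 0x...

As far as I know, the kernel keeps this material in the xfrm state objects (struct xfrm_state, under net/xfrm/), which is what both the netlink interface and the older PF_KEY socket interface expose.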
