Run CO-RE version's runqslower failed
Hi! runqslower expects that the kernel was built with BTF type info (which is enabled by the CONFIG_DEBUG_INFO_BTF=y Kconfig option). Can you please re-build your kernel with BTF enabled and try again?
By Andrii Nakryiko · #1810

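To make the prerequisite concrete (a sketch, not part of the reply itself): a kernel built with CONFIG_DEBUG_INFO_BTF=y publishes its type information at /sys/kernel/btf/vmlinux, so the check can be automated.

/* Minimal sketch: report whether the running kernel exposes BTF. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	if (access("/sys/kernel/btf/vmlinux", R_OK) == 0)
		printf("kernel BTF is available\n");
	else
		printf("no kernel BTF; rebuild with CONFIG_DEBUG_INFO_BTF=y\n");
	return 0;
}
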
Run CO-RE version's runqslower failed
Discussion has been moved to https://github.com/iovisor/bcc/issues/2770
By Andrii Nakryiko · #1811

BCC integration into Buildroot
Have you looked at using libbpf and BPF CO-RE for such use cases? The difference is that you won't have any additional runtime dependencies (no Clang/LLVM, etc.), which makes this more suitable for emb…
By Andrii Nakryiko · #1814

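To illustrate what BPF CO-RE buys here (a sketch; the tracepoint and the field being read are illustrative, and vmlinux.h is assumed to be generated from kernel BTF): field accesses are recorded as relocations and resolved against the target kernel's BTF at load time, so the same compiled object runs on different kernels with no Clang/LLVM on the box.

/* Sketch: a minimal CO-RE-relocatable BPF program. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

SEC("tp/sched/sched_switch")
int handle_switch(void *ctx)
{
	struct task_struct *task = (struct task_struct *)bpf_get_current_task();
	pid_t pid;

	/* BPF_CORE_READ emits a relocation instead of a hard-coded offset */
	pid = BPF_CORE_READ(task, pid);
	bpf_printk("current pid: %d", pid);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
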
clang 10 for BPF CO-RE
Thanks! Glad it was useful. For kernel/libbpf development we do build Clang from sources, but you can install it from packages as well. See https://apt.llvm.org/; there are packages for Clang 10 and e…
By Andrii Nakryiko · #1821

Extracting data from tracepoints (and anything else)
Nit: this is the legacy syntax for specifying BPF maps; please see [0] for some newer examples. [0] https://github.com/iovisor/bcc/tree/master/libbpf-tools. You don't need to bpf_probe_read() ctx here, you…
By Andrii Nakryiko · #1829

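For reference, the newer BTF-defined map syntax used throughout libbpf-tools looks roughly like this (a sketch; the map name, sizes, and tracepoint are illustrative):

/* Sketch: BTF-defined map declaration instead of the legacy struct bpf_map_def. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);
	__type(key, __u32);
	__type(value, __u64);
} starts SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_enter(void *ctx)
{
	__u32 pid = bpf_get_current_pid_tgid() >> 32;
	__u64 ts = bpf_ktime_get_ns();

	bpf_map_update_elem(&starts, &pid, &ts, BPF_ANY);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
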
Extracting data from tracepoints (and anything else)
Adding back the mailing list. bpf_probe_read_str() has been there for a long time, at least 4.12 or even older. samples/bpf are part of the kernel, so yes, they are using libbpf from kernel sources. For stand…
By Andrii Nakryiko · #1831

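A sketch of typical bpf_probe_read_str() usage (the probed function and its filename argument are illustrative, not from the thread; newer kernels also offer bpf_probe_read_user_str()/bpf_probe_read_kernel_str()):

/* Sketch: copy a NUL-terminated string argument into a BPF-side buffer.
 * Assumes compilation with the matching -D__TARGET_ARCH_* define for
 * the BPF_KPROBE() argument extraction. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/do_sys_open")
int BPF_KPROBE(trace_open, int dfd, const char *filename)
{
	char buf[256];

	/* copies up to sizeof(buf) bytes, stopping at the NUL terminator */
	bpf_probe_read_str(buf, sizeof(buf), filename);
	bpf_printk("open: %s", buf);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
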
Extracting data from tracepoints (and anything else)
Take a closer look. libbpf-tools do not use bpf_load.h; that one is deprecated and its use is discouraged. libbpf-tools rely on a code-generated BPF skeleton. But really, take a close look at libbpf-tool…
By Andrii Nakryiko · #1833

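For a feel of the skeleton approach, the userspace side looks roughly like this (hypothetical names: "bpftool gen skeleton myprog.bpf.o > myprog.skel.h" generates myprog_bpf__* functions for a BPF object called myprog):

/* Sketch: loading and attaching via a code-generated BPF skeleton. */
#include <stdio.h>
#include "myprog.skel.h"   /* hypothetical generated skeleton header */

int main(void)
{
	struct myprog_bpf *skel;

	skel = myprog_bpf__open_and_load();
	if (!skel) {
		fprintf(stderr, "failed to open and load BPF object\n");
		return 1;
	}
	if (myprog_bpf__attach(skel)) {
		fprintf(stderr, "failed to attach BPF programs\n");
		myprog_bpf__destroy(skel);
		return 1;
	}

	/* ... read maps, poll events, etc. ... */

	myprog_bpf__destroy(skel);
	return 0;
}
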
Extracting data from tracepoints (and anything else)
Yes. A lot of newer functionality relies on kernel BTF as well. But to compile a portable BPF program you also need kernel BTF (for BPF CO-RE stuff).
By Andrii Nakryiko · #1835

Extracting data from tracepoints (and anything else)
Just answered on another GitHub issue (https://github.com/iovisor/bcc/issues/2855#issuecomment-609532793); please check it there as well. Short answer: no. Unless you can pretty much guarantee that it…
By Andrii Nakryiko · #1837

Extracting data from tracepoints (and anything else)
Adding back the mailing list. Notice the offsets: they are all (except for the first 4 fields, which fit in the first 8 bytes) 8-byte aligned. You can do that in your struct definitions as: int __syscall_nr __attribute…
By Andrii Nakryiko · #1838

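A sketch of what that looks like for a syscall tracepoint context (field names follow /sys/kernel/debug/tracing/events/syscalls/sys_enter_openat/format; verify the offsets against that file on your kernel):

/* Sketch: force each field to the 8-byte boundary reported in the
 * format file (offsets 8, 16, 24, 32, 40 for sys_enter_openat). */
struct sys_enter_openat_args {
	unsigned long long common;                         /* common fields, first 8 bytes */
	int __syscall_nr;                                  /* offset 8 */
	int dfd __attribute__((aligned(8)));               /* offset 16 */
	const char *filename __attribute__((aligned(8)));  /* offset 24 */
	int flags __attribute__((aligned(8)));             /* offset 32 */
	unsigned short mode __attribute__((aligned(8)));   /* offset 40 */
};
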
Extracting data from tracepoints (and anything else)
You are not really accessing CPU registers, but you access their values before the program was interrupted. Those values are stored in the pt_regs struct. It's a technicality in this case, but you can't a…
By Andrii Nakryiko · #1840

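A sketch of reading those saved values in a kprobe (the probed function is illustrative; PT_REGS_PARM1() assumes the matching -D__TARGET_ARCH_* define at compile time):

/* Sketch: read the saved first-argument register via struct pt_regs. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/vfs_unlink")
int trace_unlink(struct pt_regs *ctx)
{
	/* value the first-argument register held when the probe fired */
	unsigned long arg1 = PT_REGS_PARM1(ctx);

	bpf_printk("vfs_unlink arg1: %lx", arg1);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
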
Building BPF programs and kernel persistence
Hi! For the future, I think cc'ing bpf@... would be a good idea; there are a lot of folks who are probably not watching the iovisor mailing list but could help with issues like this. I'd star…
By Andrii Nakryiko · #1849

eBPF map - Control and Data plane concurrency #bcc
No, HASH_OF_MAPS allows arbitrary-sized keys, just like a normal HASHMAP. Libbpf recently got support for a nicer map-in-map declaration and initialization; you might want to check it out: [0]. [0] http…
By Andrii Nakryiko · #1850

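A sketch of the declarative map-in-map support being referred to (map names, key layout, and sizes are illustrative; the outer entries are populated from userspace at runtime):

/* Sketch: HASH_OF_MAPS outer map with an arbitrary-sized struct key,
 * using an inner array map as the value template. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct flow_key {
	__u32 saddr;
	__u32 daddr;
	__u16 dport;
};

struct inner_map {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} inner SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
	__uint(max_entries, 1024);
	__uint(key_size, sizeof(struct flow_key));   /* arbitrary-sized key, as noted */
	__array(values, struct inner_map);           /* inner map definition/template */
} outer SEC(".maps");
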
Building BPF programs and kernel persistence
BTW, everyone seems to be using -O2 for compiling BPF programs. Not sure how well-supported -O3 will be. [...] "classic BPF" is an entirely different thing, don't use t…
By Andrii Nakryiko · #1852

BPF Concurrency
Stating that spin locks are costly without empirical data seems premature. What's the scenario? What's the number of CPUs? What's the level of contention? Under light contention, spin locks in practic…
By Andrii Nakryiko · #1857

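For context, the bpf_spin_lock pattern under discussion looks roughly like this (a sketch; the map layout and XDP hook are illustrative): the lock lives inside the map value and protects the fields next to it.

/* Sketch: protect multi-field updates in a map value with a spin lock. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct counter_val {
	struct bpf_spin_lock lock;
	__u64 packets;
	__u64 bytes;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct counter_val);
} stats SEC(".maps");

SEC("xdp")
int count(struct xdp_md *ctx)
{
	__u32 key = 0;
	struct counter_val *val = bpf_map_lookup_elem(&stats, &key);

	if (!val)
		return XDP_PASS;

	bpf_spin_lock(&val->lock);
	val->packets += 1;
	val->bytes += ctx->data_end - ctx->data;
	bpf_spin_unlock(&val->lock);
	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
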
Error loading xdp program that worked with bpf_load
This is newer Clang recording that the function is global, not static. libbpf is sanitizing BTF to remove this flag if the kernel doesn't support this. But given this is re-impleme…
By Andrii Nakryiko · #1860

Error loading xdp program that worked with bpf_load
OK, that I can help with, then. What's the kernel version? Where can I find a repro? Steps, etc. Basically, a bit more context would help, as I wasn't part of the initial discussion.
By Andrii Nakryiko · #1862

BPF Concurrency
You should use __sync_fetch_and_add() for both cases, and then yes, you won't lose any update. You probably would want __sync_add_and_fetch() to get the counter after update, but that's not supported…
By Andrii Nakryiko · #1867

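A sketch of the __sync_fetch_and_add() pattern (map and tracepoint are illustrative): the add compiles to an atomic instruction, so concurrent updates from different CPUs are not lost.

/* Sketch: atomically increment a shared counter from BPF. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} counter SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_write")
int count_writes(void *ctx)
{
	__u32 key = 0;
	__u64 *val = bpf_map_lookup_elem(&counter, &key);

	if (val)
		__sync_fetch_and_add(val, 1);   /* atomic add; no lost updates */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
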
How to get function param in kretprobe bpf program? #bcc #pragma
You can't do it reliably with a kretprobe. A kretprobe is executed right before the function is exiting; by that time, all the registers that contained input parameters could have been used for something e…
By Andrii Nakryiko · #1885

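The usual workaround (not spelled out in this truncated preview, so treat it as a sketch) is to capture the argument in a kprobe at entry, keyed by thread id, and look it up again in the matching kretprobe. kmem_cache_alloc and its first parameter are only illustrative, and the BPF_KPROBE()/BPF_KRETPROBE() argument extraction assumes the matching -D__TARGET_ARCH_* define.

/* Sketch: save an input argument at entry, read it back at exit. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);
	__type(key, __u64);            /* pid_tgid */
	__type(value, __u64);          /* saved first argument */
} saved_args SEC(".maps");

SEC("kprobe/kmem_cache_alloc")
int BPF_KPROBE(entry_probe, unsigned long first_arg)
{
	__u64 id = bpf_get_current_pid_tgid();
	__u64 arg = first_arg;         /* argument registers are still intact here */

	bpf_map_update_elem(&saved_args, &id, &arg, BPF_ANY);
	return 0;
}

SEC("kretprobe/kmem_cache_alloc")
int BPF_KRETPROBE(exit_probe, void *ret)
{
	__u64 id = bpf_get_current_pid_tgid();
	__u64 *arg = bpf_map_lookup_elem(&saved_args, &id);

	if (arg)
		bpf_printk("arg1=%llx ret=%p", *arg, ret);
	bpf_map_delete_elem(&saved_args, &id);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
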
Polling multiple BPF_MAP_TYPE_PERF_EVENT_ARRAY causing dropped events
If you have the luxury of using Linux kernel 5.8 or newer, you can try the new BPF ring buffer map, which provides an MPSC queue (so you can queue from multiple CPUs simultaneously, while BPF perf buffer al…
By Andrii Nakryiko · #1888

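A sketch of the BPF ring buffer being described (kernel 5.8+; the event layout and tracepoint are illustrative): one shared MPSC buffer is reserved and submitted directly from the BPF side, instead of per-CPU perf buffers. On the userspace side, libbpf's ring_buffer__new() and ring_buffer__poll() consume the records.

/* Sketch: emit events through a BPF_MAP_TYPE_RINGBUF map. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct event {
	__u32 pid;
	char comm[16];
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);   /* size in bytes, power of 2 */
} events SEC(".maps");

SEC("tracepoint/sched/sched_process_exec")
int handle_exec(void *ctx)
{
	struct event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);

	if (!e)
		return 0;   /* buffer full; nothing was reserved */

	e->pid = bpf_get_current_pid_tgid() >> 32;
	bpf_get_current_comm(e->comm, sizeof(e->comm));
	bpf_ringbuf_submit(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";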