
Re: Weird behaviour when updating a hash map from userspace

Yonghong Song
 

On Fri, Jan 15, 2021 at 12:42 PM William Findlay
<williamfindlay@cmail.carleton.ca> wrote:

Hi all.

I am currently debugging some very strange behaviour with eBPF hash maps and was wondering if anyone else has run into a similar issue. I am using libbpf-rs with BPF CO-RE, and my kernel version is 5.9.14.

My setup: I have a map with some compound key and I am updating it once from userspace using libbpf and once (later) from a BPF program, using the same key both times, but with different values.

Here's the weird part: somehow both key/value pairs are being stored in the map, according to the output from bpftool. Even more bizarre, the value provided from userspace is essentially a "ghost value" the entire time -- all map lookups fail until the map has been updated from a BPF program as described above.

To be clear, the weirdness is two-fold:

  1. Lookup should not fail after updating the map the first time; and
  2. The second update should have overwritten the first value.

After performing both updates, here is the output from bpftool showcasing the weird behaviour:

[{
        "key": {
            "id": 3069983010007500772,
            "device": 48
        },
        "value": 10
    },{
        "key": {
            "id": 3069983010007500772,
            "device": 48
        },
        "value": 40
    }
]
Does your key data structure have padding? Different padding values
will result in different actual keys.

If padding is not the issue in your case, could you construct a test
case (ideally with no Rust involved) so we can take a deeper look?
You can file an issue to document this if you intend to send a test case.
Thanks!


This behaviour also seems to be inconsistent between different maps and yet consistent between different runs. For some maps, I get the expected result and for others I get this weirdness instead.

Is this possibly a bug in the kernel? Any assistance would be greatly appreciated.

Regards,
William
[...]


Weird behaviour when updating a hash map from userspace

williamfindlay@...
 

Hi all.

I am currently debugging some very strange behaviour with eBPF hash maps and was wondering if anyone else has run into a similar issue. I am using libbpf-rs with BPF CO-RE, and my kernel version is 5.9.14.

My setup: I have a map with some compound key and I am updating it once from userspace using libbpf and once (later) from a BPF program, using the same key both times, but with different values.

Here's the weird part: somehow both key/value pairs are being stored in the map, according to the output from bpftool. Even more bizarre, the value provided from userspace is essentially a "ghost value" the entire time -- all map lookups fail until the map has been updated from a BPF program as described above.

To be clear, the weirdness is two-fold:

  1. Lookup should not fail after updating the map the first time; and
  2. The second update should have overwritten the first value.

After performing both updates, here is the output from bpftool showcasing the weird behaviour:

[{
        "key": {
            "id": 3069983010007500772,
            "device": 48
        },
        "value": 10
    },{
        "key": {
            "id": 3069983010007500772,
            "device": 48
        },
        "value": 40
    }

]

This behaviour also seems to be inconsistent between different maps and yet consistent between different runs. For some maps, I get the expected result and for others I get this weirdness instead.

Is this possibly a bug in the kernel? Any assistance would be greatly appreciated.

Regards,
William


Re: verifier: variable offset stack access question

Yonghong Song
 

On Fri, Dec 25, 2020 at 5:41 PM Andrei Matei <andreimatei1@gmail.com> wrote:

For posterity, I think I can now answer my own question. I suspect
things were different in 2018 (because otherwise I don’t see how the
referenced exchange makes sense); here’s my understanding about the
verifier’s rules for stack accesses today:

There are two distinct aspects relevant to the use of variable stack offsets:

1) “Direct” stack access with a variable offset. This is simply
forbidden; you can’t read from or write to a dynamic offset on the stack
because, in the case of reads, the verifier doesn’t know what type of
memory would be returned (is it “misc” data? Is it a spilled
register?) and, in the case of writes, which stack slot’s memory type
should be updated.
Separately, when reading from the stack with a fixed offset, the
respective memory needs to have been initialized (i.e. written to)
before.

2) Passing pointers to the stack to helper functions which will write
through the pointer (such as bpf_probe_read_user()). Here, if the
stack offset is variable, then all the memory that falls within the
possible bounds has to be initialized.
If the offset is fixed, then the memory doesn’t necessarily need to be
initialized (at least not if the helper’s argument is of type
ARG_PTR_TO_UNINIT_MEM). Why the restriction in the variable offset
case? Because, in that case, it cannot be known what memory the helper
will end up initializing; if the verifier pretended that all the
memory within the offset bounds would be initialized then further
reads could leak uninitialized stack memory.
I think your above assessment is essentially correct. For any read/write
to the stack in bpf programs, the stack offset must be known so the
verifier knows exactly what the program is trying to do. For helpers,
a variable-length stack region is permitted, and the verifier will do
analysis to ensure the stack meets the memory requirements (esp.
initialized memory) stated in the helper's proto definition.






Re: verifier: variable offset stack access question

Yonghong Song
 

On Wed, Dec 23, 2020 at 2:21 PM Andrei Matei <andreimatei1@gmail.com> wrote:

Hello Yonghong, all,

I'm curious about a verifier workaround that Yonghong provided two years ago, in this thread.
Brendan Gregg was asking about accessing stack buffers through a register with a variable offset, and Yonghong suggested a memset as a solution:
"
You can initialize the array with ' ' to workaround the issue:
    struct data_t data;
    uint64_t max = sizeof(data.argv);
    const char *argp = NULL;
    memset(&data, ' ', sizeof(data));
    bpf_probe_read(&argp, sizeof(argp), (void *)&__argv[0]);
    uint64_t len = bpf_probe_read_str(&data.argv, max, argp);
    len &= 0xffffffff; // to avoid: "math between fp pointer and register errs"
    bpf_trace_printk("len: %d\n", len); // sanity check: len is indeed valid
"

My question is - how does the memset help? I sort of understand the trouble with variable stack access (different regions of the stack can hold memory of different types), and I've looked through the verifier's code but I've failed to get a clue.
I cannot remember the details. Here, what the memset did is initialize
the related bytes on the stack. I guess maybe at that point
bpf_probe_read_str required initialized memory?

Right now, bpf_probe_read_str does not require initialized memory, so
memset may not be necessary.


As far as actually trying the trick, I've had difficulty importing <string.h> in my bpf program. I'm not working in the context of BCC, so maybe that makes the difference. I've tried zero-ing out my buffer manually, and it didn't seem to change anything. I've had better success allocating my buffer using map memory rather than stack memory, but I'm still curious what a memset could do for me.
A lot of string.h functions are implemented as external functions in
glibc. This won't work for bpf programs, as the bpf program is not
linked against glibc. The clang compiler will translate the above
memset into plain stores if the memset() size is small enough. Better,
use clang's __builtin_memset(), so it has no dependence on
glibc.


Thanks a lot!

- Andrei


Re: verifier: variable offset stack access question

Andrei Matei
 

For posterity, I think I can now answer my own question. I suspect
things were different in 2018 (because otherwise I don’t see how the
referenced exchange makes sense); here’s my understanding about the
verifier’s rules for stack accesses today:

There are two distinct aspects relevant to the use of variable stack offsets:

1) “Direct” stack access with a variable offset. This is simply
forbidden; you can’t read from or write to a dynamic offset on the stack
because, in the case of reads, the verifier doesn’t know what type of
memory would be returned (is it “misc” data? Is it a spilled
register?) and, in the case of writes, which stack slot’s memory type
should be updated.
Separately, when reading from the stack with a fixed offset, the
respective memory needs to have been initialized (i.e. written to)
before.

2) Passing pointers to the stack to helper functions which will write
through the pointer (such as bpf_probe_read_user()). Here, if the
stack offset is variable, then all the memory that falls within the
possible bounds has to be initialized.
If the offset is fixed, then the memory doesn’t necessarily need to be
initialized (at least not if the helper’s argument is of type
ARG_PTR_TO_UNINIT_MEM). Why the restriction in the variable offset
case? Because, in that case, it cannot be known what memory the helper
will end up initializing; if the verifier pretended that all the
memory within the offset bounds would be initialized then further
reads could leak uninitialized stack memory.


verifier: variable offset stack access question

Andrei Matei
 

Hello Yonghong, all,

I'm curious about a verifier workaround that Yonghong provided two years ago, in this thread.
Brendan Gregg was asking about accessing stack buffers through a register with a variable offset, and Yonghong suggested a memset as a solution:
"
You can initialize the array with ' ' to workaround the issue:
    struct data_t data;
    uint64_t max = sizeof(data.argv);
    const char *argp = NULL;
    memset(&data, ' ', sizeof(data));
    bpf_probe_read(&argp, sizeof(argp), (void *)&__argv[0]);
    uint64_t len = bpf_probe_read_str(&data.argv, max, argp);
    len &= 0xffffffff; // to avoid: "math between fp pointer and register errs"
    bpf_trace_printk("len: %d\n", len); // sanity check: len is indeed valid
"

My question is - how does the memset help? I sort of understand the trouble with variable stack access (different regions of the stack can hold memory of different types), and I've looked through the verifier's code but I've failed to get a clue.

As far as actually trying the trick, I've had difficulty importing <string.h> in my bpf program. I'm not working in the context of BCC, so maybe that makes the difference. I've tried zero-ing out my buffer manually, and it didn't seem to change anything. I've had better success allocating my buffer using map memory rather than stack memory, but I'm still curious what a memset could do for me.

Thanks a lot!

- Andrei


[Warning ⚠] Do you understand how to build bpf.file for snort on fedora?

Dorian ROSSE
 

Hello, 


[Warning ⚠] Do you understand how to build bpf.file for snort on fedora?

Thank you in advance, 

Hoping for success,

Regards. 


Dorian Rosse 


Re: High volume bpf_perf_output tracing

Daniel Xu
 

Hi,

Ideally you’d want to do as much work in the kernel as possible. Passing that much data to user space is kind of misusing BPF.

What kind of work are you doing that can only be done in user space?

But otherwise, yeah, if you need perf, you might get more power from a lower-level language. C/C++ is one option, or you could check out libbpf-rs if you prefer to write in Rust.

Daniel

On Thu, Nov 19, 2020, at 5:56 PM, wes.vaske@gmail.com wrote:
I'm currently working on a python script to trace the nvme driver. I'm
hitting a performance bottleneck on the event callback in python and am
looking for the best way (or maybe a quick and dirty way) to improve
performance.

Currently I'm attaching to a kprobe and 2 tracepoints and using
perf_submit to pass information back to userspace.

When my callback is:
def count_only(cpu, data, size):
    event_count += 1

My throughput is ~2,000,000 events per second

When my callback is my full event processing the throughput drops to
~40,000 events per second.

My first idea was to put the event_data in a Queue and have multiple
worker processes handle the parsing. Unfortunately the bcc.Table
classes aren't pickleable. As soon as we start parsing data to put in
the queue we drop down to 150k events per second without even touching
the Queue, just converting data types.

My next idea was to just store the data in memory and process after the
fact (for this use case, I effectively have "unlimited" memory for the
trace). This ranges from 100k to 450k events per second. (I think
python has issues allocating memory quickly with list.append() and
with tuning I should be able to get 450k sustained). This isn't
terrible but I'd like to be above 1,000,000 events per second.

My next idea was to see if I can attach multiple reader processes to
the same BPF map. This is where I hit the wall and came here. It looks
like there isn't a way to do this with the Python API; at least not
easily.

With that context, I have 2 questions:
1. Is there a way I can attach multiple python processes to the same
BPF map to poll in parallel? Event ordering doesn't matter, I'll just
post process it all anyway. This doesn't need to be a final solution,
just something to get me through the next month
2. What is the "right" way to do this? My primary concern is
increasing the rate at which I can move data from the BPF_PERF_OUTPUT
map to userspace. It looks like the Python API is being deprecated in
favor of libbpf. So I'm assuming a C++ version of this script would be
the "right" way? (I've never touched C/C++ outside the BPF C code so
this would need to be a future project for me)


Thanks!


Re: BPF Maps with wildcards

Yonghong Song
 

On Thu, Nov 19, 2020 at 9:57 AM Marinos Dimolianis
<dimolianis.marinos@gmail.com> wrote:

Thanks for the response.
LPM is actually the closest solution; however, I wanted a structure closer to the way TCAMs operate, in which you can also have wildcards in the interim bits.
I believe that something like that does not exist and I need to implement it using available structures in
eBPF/XDP.

Right, BPF does not have TCAM-style maps. If you organize your data
structure properly, you may be able to use LPM.


On Thu, Nov 19, 2020 at 5:27 AM, Y Song <ys114321@gmail.com> wrote:

On Wed, Nov 18, 2020 at 6:20 AM <dimolianis.marinos@gmail.com> wrote:

Hi all, I am trying to find a way to represent wildcards in BPF Map Keys?
I could not find anything relevant to that, does anyone know anything further.
Are there any efforts towards that functionality?
The closest map is lpm (trie) map. You may want to take a look.


High volume bpf_perf_output tracing

wes.vaske@...
 

I'm currently working on a python script to trace the nvme driver. I'm hitting a performance bottleneck on the event callback in python and am looking for the best way (or maybe a quick and dirty way) to improve performance.

Currently I'm attaching to a kprobe and 2 tracepoints and using perf_submit to pass information back to userspace.

When my callback is:
def count_only(cpu, data, size):
    event_count += 1

My throughput is ~2,000,000 events per second

When my callback is my full event processing the throughput drops to ~40,000 events per second.

My first idea was to put the event_data in a Queue and have multiple worker processes handle the parsing. Unfortunately the bcc.Table classes aren't pickleable. As soon as we start parsing data to put in the queue we drop down to 150k events per second without even touching the Queue, just converting data types.

My next idea was to just store the data in memory and process after the fact (for this use case, I effectively have "unlimited" memory for the trace). This ranges from 100k to 450k events per second. (I think python has issues allocating memory quickly with list.append() and with tuning I should be able to get 450k sustained). This isn't terrible but I'd like to be above 1,000,000 events per second.

My next idea was to see if I can attach multiple reader processes to the same BPF map. This is where I hit the wall and came here. It looks like there isn't a way to do this with the Python API; at least not easily.

With that context, I have 2 questions:
  1. Is there a way I can attach multiple python processes to the same BPF map to poll in parallel? Event ordering doesn't matter, I'll just post process it all anyway. This doesn't need to be a final solution, just something to get me through the next month
  2. What is the "right" way to do this? My primary concern is increasing the rate at which I can move data from the BPF_PERF_OUTPUT map to userspace. It looks like the Python API is being deprecated in favor of libbpf. So I'm assuming a C++ version of this script would be the "right" way? (I've never touched C/C++ outside the BPF C code so this would need to be a future project for me)


Thanks!


Re: BPF Maps with wildcards

Marinos Dimolianis
 

Thanks for the response.
LPM is actually the closest solution; however, I wanted a structure closer to the way TCAMs operate, in which you can also have wildcards in the interim bits.
I believe that something like that does not exist and I need to implement it using available structures in eBPF/XDP.

On Thu, Nov 19, 2020 at 5:27 AM, Y Song <ys114321@...> wrote:

On Wed, Nov 18, 2020 at 6:20 AM <dimolianis.marinos@...> wrote:
>
> Hi all, I am trying to find a way to represent wildcards in BPF Map Keys?
> I could not find anything relevant to that, does anyone know anything further.
> Are there any efforts towards that functionality?

The closest map is lpm (trie) map. You may want to take a look.


Re: BPF Maps with wildcards

Yonghong Song
 

On Wed, Nov 18, 2020 at 6:20 AM <dimolianis.marinos@gmail.com> wrote:

Hi all, I am trying to find a way to represent wildcards in BPF Map Keys?
I could not find anything relevant to that, does anyone know anything further.
Are there any efforts towards that functionality?
The closest map is lpm (trie) map. You may want to take a look.


BPF Maps with wildcards

Marinos Dimolianis
 

Hi all, I am trying to find a way to represent wildcards in BPF Map Keys?
I could not find anything relevant to that, does anyone know anything further.
Are there any efforts towards that functionality?
Regards,
Marinos


Attaching dynamic uprobe to C++ library/application #bcc

harnan@...
 

Hi all,

I am learning about eBPF and the bcc tools/library. I have a question about dynamic uprobes of C++ code. I have been able to attach a uprobe successfully by looking up the mangled symbol name. However, I am curious how the bpf program can access the parameters or arguments of a function I am probing. For a C++ object, do I just create an equivalent C struct that represents the application's C++ object/class, and then cast the argument (from PT_REGS_PARM[x](ctx))?

Thanks!
Siva


Re: Future of BCC Python tools

Alexei Starovoitov
 

On Mon, Oct 26, 2020 at 3:34 PM Brendan Gregg <brendan.d.gregg@gmail.com> wrote:

G'Day all,

I have colleagues working on BCC Python tools (e.g., the recent
enhancement of tcpconnect.py) and I'm wondering, given libbpf tools,
what our advice should be.

- Should we keep both Python and libbpf tools in sync?
- Should we focus on libbpf only, and leave Python versions for legacy systems?
bcc python is still used by many in cases where they need on-the-fly
compilation. Such cases still exist. One example is USDT support.
The libbpf and CO-RE support for USDT is still wip.
So such cases have to continue using bcc style with llvm.
The number of such cases is gradually reducing.
I think right now anyone who starts with bpf should be all set with
libbpf, BTF and CO-RE. It's much better suited for embedded setups too.
So I think bcc as a go-to place is still a great framework, but adding
a new python based tool is probably not the best investment of time
for the noobs. Experienced folks who already learned py-bcc will
keep hacking their scripts in python. That's all great.
noobs should probably learn bpftrace for quick experiments
and libbpf-tools for standalone long-term tried-and-true tools.

Should we keep libbpf-tools and py-bcc tools in sync?
I think py tools where libbpf-tools replacement is complete could be
moved into 'deprecated' directory and not installed by default.
All major distros are built with CONFIG_DEBUG_INFO_BTF=y
so the users won't be surprised. Their favorite tools will keep
working as-is. The underlying implementation of them will quietly change.
We can document it of course, but who reads docs.


Future of BCC Python tools

Brendan Gregg
 

G'Day all,

I have colleagues working on BCC Python tools (e.g., the recent
enhancement of tcpconnect.py) and I'm wondering, given libbpf tools,
what our advice should be.

- Should we keep both Python and libbpf tools in sync?
- Should we focus on libbpf only, and leave Python versions for legacy systems?

I like the tweak-ability of the Python tools: sometimes I'm on a
production instance and I'll copy a tool and edit it on the fly. That
won't work with libbpf. Although, we also install all the bpftrace
tools on our prod instances [0], and if I'm editing tools I start with
them.

However, the llvm dependency of the Python tools is a pain, and an
obstacle for making bcc tools a default install with different
distros. I could imagine having a selection of the top 10 libbpf tools
as a package (bcc-essentials), which would be about 1.5 Mbytes (last
time I did libbpf tool it was 150 Kbytes stripped), and getting that
installed by default by different distros. (Ultimately, I want a
lightweight bpftrace installed by default as well.)

So, I guess I'm not certain about the future of the BCC Python tools.
What do people think? If we agree that the Python tools are legacy, we
should update the README to let everyone know.

Note: I'm just talking about the tools (tools/*.py)! I imagine BCC
Python is currently used for many other BPF things, and I'm not
suggesting that go away.

Brendan

[0] https://github.com/Netflix-Skunkworks/bpftoolkit


execveat tracepoints issues

alessandro.gario@...
 

Hello everyone!

I am experiencing some issues with the execveat tracepoints, and was wondering if others could reproduce it or help me understand what I am doing wrong.

On Arch Linux (kernel 5.9.1, perf 5.7.g3d77e6a8804a), both sys_enter_execveat and sys_exit_execveat never seem to report any event.

On Ubuntu 20.04 (kernel 5.4.0, perf 5.4.65), sys_enter_execveat will work provided there is no one else making use of that tracepoint, while sys_exit_execveat is always completely silent.

I traced the program I am using to test this with strace and verified that execveat is being called correctly. The following is the source code for that program:

---
#include <unistd.h>
#include <linux/fcntl.h>
#include <linux/unistd.h>

int main() {
    syscall(__NR_execveat, AT_FDCWD,
            "/usr/bin/bash", NULL,
            NULL, 0);

    return 0;
}
---

Here's a recording of what I'm experiencing on Ubuntu: https://asciinema.org/a/6EiDfoOpK1AYcDm7aPftrYqdo

Thanks for your help!

Alessandro Gario


Re: Minimum LLVM version for bcc

Yonghong Song
 

On Wed, Oct 21, 2020 at 8:57 AM Dale Hamel <daleha@gmail.com> wrote:

Does the LLVM version used by bcc matter, for packaging purposes?
This is a good question. For packaging purposes, no, it does not matter
much. The people who build packages can choose whatever is
available to them. bcc is supposed to work with all major
llvm releases since llvm 3.7.


I assume bcc includes some static libraries from LLVM, so I'm curious if the older versions are acceptable. For instance, on ubuntu 16.04, we use LLVM 3.7, but on ubuntu 18.04 and 20.04 it uses LLVM 6.0, based on the current debian control file.
This is probably due to historical reasons.


Are there features of newer LLVM releases that we need? For example, does BTF require a specific minimum version of LLVM? If this is the case, perhaps we should update the dependency descriptions in the debian control file to reflect this.
For BTF support, best is llvm >= 10. For testing purposes, we may still
want to keep an option to build with older llvm versions.


Minimum LLVM version for bcc

Dale Hamel
 

Does the LLVM version used by bcc matter, for packaging purposes?

I assume bcc includes some static libraries from LLVM, so I'm curious if the older versions are acceptable. For instance, on ubuntu 16.04, we use LLVM 3.7, but on ubuntu 18.04 and 20.04 it uses LLVM 6.0, based on the current debian control file.

Are there features of newer LLVM releases that we need? For example, does BTF require a specific minimum version of LLVM? If this is the case, perhaps we should update the dependency descriptions in the debian control file to reflect this.

-Dale


Re: [Ext] Re: [iovisor-dev] Questions about current eBPF usages

Jiada Tu
 

Thank you very much, Yonghong! Those are very helpful.

1 - 20 of 1943