
Re: Reading Pinned maps in eBPF Programs

Andrii Nakryiko
 

On Thu, Aug 20, 2020 at 5:35 AM Ian <icampbe14@...> wrote:

Interestingly enough, I am using clang version 10.0.0! Even with that, creating a structure from the examples like so:

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u32);
    __uint(pinning, LIBBPF_PIN_BY_NAME);
} pid_map SEC(".maps");

I still get: libbpf: BTF is required, but is missing or corrupted.
Your BPF code must be relying on CO-RE. I can check if you show me
your BPF source code.

The pinning and map definition itself doesn't rely on CO-RE and thus
doesn't need kernel BTF.


Here is my clang version output:

vagrant@vagrant:/vagrant$ clang -v
clang version 10.0.0-4ubuntu1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/9
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/9
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/9
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Candidate multilib: x32;@mx32
Selected multilib: .;@m64

I will continue looking into new clang versions to see if mine is slightly out of date!



Re: Reading Pinned maps in eBPF Programs

Ian
 

Interestingly enough, I am using clang version 10.0.0! Even with that, creating a structure from the examples like so:

struct {     
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u32);
    __uint(pinning, LIBBPF_PIN_BY_NAME);
} pid_map SEC(".maps");
 
I still get: libbpf: BTF is required, but is missing or corrupted.

Here is my clang version output: 

vagrant@vagrant:/vagrant$ clang -v
clang version 10.0.0-4ubuntu1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/9
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/9
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/9
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Candidate multilib: x32;@mx32
Selected multilib: .;@m64

I will continue looking into new clang versions to see if mine is slightly out of date!



Re: Reading Pinned maps in eBPF Programs

Andrii Nakryiko
 

On Wed, Aug 19, 2020 at 3:40 PM Ian <icampbe14@...> wrote:

Libbpf supports declarative pinning of maps, that's how you easily get
"map re-use" from BPF side. See [0] for example.

These examples are exactly what I am looking for, but it appears that they either require BTF activated in the kernel or require a 5.8 kernel. Unfortunately I am targeting the new Ubuntu 20.04 system with "out-of-the-box" configurations. So that means I am saddled with kernel v5.4 and BTF not active. Why does libbpf's declarative map pinning require BTF? Does the metadata within BTF support the ability to correctly find and open the map?
It doesn't require kernel BTF for that. Only BPF program's BTF
generated by Clang. So you'll need something like Clang 10 (or maybe
Clang 9 will do as well), but no requirements for kernel BTF.
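[As a concrete illustration of the point above: the BTF that libbpf needs comes from Clang itself when the BPF object is built with -g. This is a minimal sketch, not code from the thread; the file name is made up.]

```c
/* minimal_pin.bpf.c -- hypothetical minimal object; build with:
 *   clang -g -O2 -target bpf -c minimal_pin.bpf.c -o minimal_pin.bpf.o
 * The -g flag makes Clang emit the .BTF section that libbpf requires
 * for .maps-style definitions. No kernel BTF (/sys/kernel/btf/vmlinux)
 * is needed unless the program also uses CO-RE relocations. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u32);
    __uint(pinning, LIBBPF_PIN_BY_NAME);
} pid_map SEC(".maps");

char LICENSE[] SEC("license") = "GPL";
```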


Re: Reading Pinned maps in eBPF Programs

Ian
 

Libbpf supports declarative pinning of maps, that's how you easily get
"map re-use" from BPF side. See [0] for example.
These examples are exactly what I am looking for, but it appears that they either require BTF activated in the kernel or require a 5.8 kernel. Unfortunately I am targeting the new Ubuntu 20.04 system with "out-of-the-box" configurations. So that means I am saddled with kernel v5.4 and BTF not active. Why does libbpf's declarative map pinning require BTF? Does the metadata within BTF support the ability to correctly find and open the map?


Re: Reading Pinned maps in eBPF Programs

Andrii Nakryiko
 

On Mon, Aug 17, 2020 at 6:36 AM Ian <icampbe14@...> wrote:

You can use bpf_obj_get() API to get a reference to the pinned map.

It was my understanding that bpf_obj_get was intended to be used as a user space API. I am looking to "open" or obtain a reference to a map in the actual eBPF program that is loaded into the kernel space. My eBPF programs do include linux/bpf.h but not the uapi bpf.h. Can/should you use it in the actual BPF program? Or is there a different way to achieve this?
Libbpf supports declarative pinning of maps, that's how you easily get
"map re-use" from BPF side. See [0] for example.

But there is also bpf_map__pin() and bpf_map__reuse_fd() API on
user-space side to set everything up, if you need to do it more
flexibly.

[0] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_pinning.c
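[For the more flexible user-space route mentioned above, a sketch of reusing an already-pinned map before load might look like the following. This is illustrative only: the map name and pin path are assumptions, and error handling is omitted.]

```c
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

/* Hypothetical sketch: make a BPF object reuse a map already pinned
 * at /sys/fs/bpf/pid_map instead of creating a fresh map at load time.
 * Must be called after bpf_object__open() but before bpf_object__load(). */
int reuse_pinned_map(struct bpf_object *obj)
{
    struct bpf_map *map = bpf_object__find_map_by_name(obj, "pid_map");
    int pinned_fd = bpf_obj_get("/sys/fs/bpf/pid_map");

    if (!map || pinned_fd < 0)
        return -1;

    /* Point the object's map at the existing kernel map. */
    return bpf_map__reuse_fd(map, pinned_fd);
}
```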

I have seen a function called bpf_obj_get_user in linux/bpf.h but I cannot find any documentation on it. It also just returns an unsupported error in my kernel's source code.

static inline int bpf_obj_get_user(const char __user *pathname, int flags)
{
        return -EOPNOTSUPP;
}

BPF_ANNOTATE_KV_PAIR is the old way to provide map key/value types,
mostly for pretty-printing. bcc still uses it. libbpf can use more
advanced mechanisms with direct .maps section attributes.

Ahh interesting!


Re: Reading Pinned maps in eBPF Programs

Ian
 

You can use bpf_obj_get() API to get a reference to the pinned map.

It was my understanding that bpf_obj_get was intended to be used as a user space API. I am looking to "open" or obtain a reference to a map in the actual eBPF program that is loaded into the kernel space. My eBPF programs do include linux/bpf.h but not the uapi bpf.h. Can/should you use it in the actual BPF program? Or is there a different way to achieve this?

I have seen a function called bpf_obj_get_user in linux/bpf.h but I cannot find any documentation on it. It also just returns an unsupported error in my kernel's source code. 

static inline int bpf_obj_get_user(const char __user *pathname, int flags)
{
        return -EOPNOTSUPP;
}

BPF_ANNOTATE_KV_PAIR is the old way to provide map key/value types,
mostly for pretty-printing. bcc still uses it. libbpf can use more
advanced mechanisms with direct .maps section attributes.
Ahh interesting! 


Re: Reading Pinned maps in eBPF Programs

Yonghong Song
 

On Fri, Aug 14, 2020 at 12:05 PM Ian <icampbe14@...> wrote:

Hello BPF Community!

Hope you are all doing well. I am trying to have a user space program create a BPF hash map with a single element containing its PID. This map could then be read by all the BPF programs loaded by the user space program. Any event the BPF programs would handle would first have its PID compared with the user space program's. If the PIDs matched (this is a single threaded application), the event would be thrown out, to eliminate processing of events that come from the user space program's own feedback. I was doing some research into this and found a similar post here: https://lists.iovisor.org/g/iovisor-dev/message/1389?p=,,,20,0,0,0::Created,,Userspace+Maps,20,2,0,23673879 that discusses the possibility of this in C++ and BCC. I am curious as to how this could be possible using the standard BPF functions and the libbpf library on Ubuntu 20.04 and Linux kernel v5.4. NOTE: BTF is not currently compiled into this kernel.

I have created and pinned the map in my user space program like this:

char map_name[] = "pid_map";
int fd = bpf_create_map_name(BPF_MAP_TYPE_HASH, map_name, sizeof(u32), sizeof(u32), 1, 0);
u32 key = 1;
bpf_map_update_elem(fd, &key, &PID, BPF_ANY);
char pid_map_path[] = "/sys/fs/bpf/pid_map";
bpf_obj_pin(fd, pid_map_path);

NOTE: Error checking and some syntax stuff was removed for brevity.

In my BPF programs I know I cannot "open" a map using bpf_obj_open. Therefore, I need a reference. I looked into the link provided above; essentially, in the BPF program all they did was define the map as an extern map def. So I reproduced this in my BPF program like this:
You can use bpf_obj_get() API to get a reference to the pinned map.


extern struct bpf_map_def pid_map;

u32 *pid = bpf_map_lookup_elem(&pid_map, &key);

This was to see if the BPF loading process would catch the matching map names. Interestingly, this resulted in a libbpf error:
libbpf: failed to find BTF for extern 'pid_map': -3

Looking at this error message, it would appear that there is a way to get this kind of functionality using BTF. The error message implies that some sort of BTF metadata is being searched in some location to match the extern map I have declared. Knowing this, I am curious as to how I can create a reference for multiple BPF programs that could read the data in the pid_map to prevent feedback issues. I have looked into libbpf and the standard bpf.h functions but couldn't really find anything that seemed plausible. One thing I did see and am also curious about is the usage of BPF_ANNOTATE_KV_PAIR. This macro seemed like a possibility, but my lack of understanding of BTF has not been able to confirm it. I also wasn't sure if using bpf_helpers.h in a user space program was ideal.
BPF_ANNOTATE_KV_PAIR is the old way to provide map key/value types,
mostly for pretty-printing. bcc still uses it. libbpf can use more
advanced mechanisms with direct .maps section attributes.



Thank you so much in advance for any response! I really have been amazed at how responsive the community is here. You all have helped me learn so much about BPF!

Ian


Reading Pinned maps in eBPF Programs

Ian
 

Hello BPF Community! 

Hope you are all doing well. I am trying to have a user space program create a BPF hash map with a single element containing its PID. This map could then be read by all the BPF programs loaded by the user space program. Any event the BPF programs would handle would first have its PID compared with the user space program's. If the PIDs matched (this is a single threaded application), the event would be thrown out, to eliminate processing of events that come from the user space program's own feedback. I was doing some research into this and found a similar post here: https://lists.iovisor.org/g/iovisor-dev/message/1389?p=,,,20,0,0,0::Created,,Userspace+Maps,20,2,0,23673879 that discusses the possibility of this in C++ and BCC. I am curious as to how this could be possible using the standard BPF functions and the libbpf library on Ubuntu 20.04 and Linux kernel v5.4. NOTE: BTF is not currently compiled into this kernel.

I have created and pinned the map in my user space program like this: 

    char map_name[] = "pid_map";
    int fd = bpf_create_map_name(BPF_MAP_TYPE_HASH, map_name, sizeof(u32), sizeof(u32), 1, 0);
    u32 key = 1;
    bpf_map_update_elem(fd, &key, &PID, BPF_ANY);
    char pid_map_path[] = "/sys/fs/bpf/pid_map";
    bpf_obj_pin(fd, pid_map_path);

NOTE: Error checking and some syntax stuff was removed for brevity.

In my BPF programs I know I cannot "open" a map using bpf_obj_open. Therefore, I need a reference. I looked into the link provided above; essentially, in the BPF program all they did was define the map as an extern map def. So I reproduced this in my BPF program like this:

extern struct bpf_map_def pid_map;

u32 *pid = bpf_map_lookup_elem(&pid_map, &key);

This was to see if the BPF loading process would catch the matching map names. Interestingly, this resulted in a libbpf error:
libbpf: failed to find BTF for extern 'pid_map': -3

Looking at this error message, it would appear that there is a way to get this kind of functionality using BTF. The error message implies that some sort of BTF metadata is being searched in some location to match the extern map I have declared. Knowing this, I am curious as to how I can create a reference for multiple BPF programs that could read the data in the pid_map to prevent feedback issues. I have looked into libbpf and the standard bpf.h functions but couldn't really find anything that seemed plausible. One thing I did see and am also curious about is the usage of BPF_ANNOTATE_KV_PAIR. This macro seemed like a possibility, but my lack of understanding of BTF has not been able to confirm it. I also wasn't sure if using bpf_helpers.h in a user space program was ideal.


Thank you so much in advance for any response! I really have been amazed at how responsive the community is here. You all have helped me learn so much about BPF! 

Ian


Re: Polling multiple BPF_MAP_TYPE_PERF_EVENT_ARRAY causing dropped events

Andrii Nakryiko
 

On Wed, Aug 12, 2020 at 5:38 AM Ian <icampbe14@...> wrote:

If you have the luxury of using Linux kernel 5.8 or newer, you can try
a new BPF ring buffer map, that provides MPSC queue (so you can queue
from multiple CPUs simultaneously, while BPF perf buffer allows you to
only enqueue on your current CPU). But what's more important for you,
libbpf's ring_buffer interface allows you to do exactly what you need:
poll multiple independent ring buffers simultaneously from a single
epoll FD. See [0] for example of using that API in user-space, plus
[1] for corresponding BPF-side code.

But having said that, we should probably extend libbpf's perf_buffer
API to support similar use cases. I'll try to do this some time soon.

[0] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c#L54-L62
[1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c

Unfortunately my project is currently targeting Ubuntu 20.04 which ships with linux kernel version 5.4. It is a shame because the new ring buffer interface looks excellent! That said, would you still suggest we use the perf functionality? Or is this currently an incorrect usage? (More on possible changes below)
No, perf buffer is just fine for passing data from the BPF program in
the kernel to the user-space part for post-processing.


Yes, after your handle_event() callback returns, libbpf marks that
sample as consumed and the space it was taking is now available for
new samples to be enqueued. You are right, though, that by increasing
the size of each per-CPU perf ring buffer, you'll delay the drops,
because now you can accumulate more samples in the ring before the
ring buffer is full.

When you say delay the drops, do you mean that the threshold for dropping events is larger? So if I made my page count 256, would that make it far less likely to receive dropped events altogether? What would a suggested page count be? I initially thought 16 seemed like plenty, but I haven't found any research to support this. Will I always lose some events? Because that is the behavior I am witnessing right now. It seems like I always eventually start to lose events. Some of this might be due to a feedback loop where my BPF program that monitors file opens collects events triggered by my user space program. I was thinking about using a BPF map that is written by my user space program containing its PID and having all my BPF programs read that map and not write any corresponding events with matching PIDs. Any advice or thoughts on this would be appreciated!
It's hard to give you any definitive answer, it all depends. But think
about this. Perf buffer is a queue. Let's say that your per-CPU buffer
size is 1MB, and each of your samples is, say, 1KB. What does that
mean? It means that at any given time you can have at most 1024
samples enqueued. So, if your BPF program in the kernel generates
those 1024 samples faster than the user-space side consumes them, then
you'll have drops. So you have many ways to reduce drops:

1. Generate events at a lower rate. E.g., add sampling, filter out
unuseful events, etc. This will give the user-space side time to consume.
2. Speed up user-space. Many things can influence this. You can do
less work per item. You can ensure you start reacting to items sooner
by increasing priority of your consumer thread and/or pin it to a
dedicated CPU, etc.
3. Reduce the size of the event. If you can reduce sample size from
1KB to 512B by more effective data encoding or dropping unnecessary
data, you suddenly will be able to produce up to 2048 events before
running out of space. That will give your user-space more time to
consume data.
4. Increase per-CPU buffer size. Going from 1MB to 2MB will have the
same effect as reducing sample size from 1KB to 512B, again,
increasing the capacity of your buffer and thus giving more time to
consume data.

Hope that makes sense and helps show that I can't answer your
questions; you'll need to do the analysis on your own based on your
specific implementation and problem domain.

Some of the event loss might also be attributed to the inefficiencies of my looping mechanism, although I think the feedback loop might be the bigger culprit. I am thinking about following the Sysdig approach, which is to have a single perf buffer that is used by all my BPF programs (16 in total). This would remove the loop and eliminate all but 1 perf buffer. I would think that would be more efficient because I am removing 15 perf buffers and their epoll_waits. Then I would use an ID member of each passed data structure to properly read the data.
Yes, that would be a good approach. It's better to have a single,
16x-bigger perf_buffer shared across all BPF programs than 16 separate
smaller perf buffers, because you can absorb event spikes more
effectively.
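[BPF-side, the shared-buffer idea could be sketched as below. The struct layout, program ID values, and attach point are invented for illustration; every program outputs into one PERF_EVENT_ARRAY and tags each sample with an ID that user space can dispatch on.]

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* One perf buffer shared by all programs in the object. */
struct {
    __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
    __uint(key_size, sizeof(int));
    __uint(value_size, sizeof(int));
} events SEC(".maps");

/* Hypothetical common event header: prog_id tells user space
 * which of the 16 programs produced this sample. */
struct event {
    __u32 prog_id;
    __u32 pid;
};

SEC("kprobe/do_sys_open")
int probe_open(struct pt_regs *ctx)
{
    struct event e = {
        .prog_id = 1,   /* unique per BPF program */
        .pid = bpf_get_current_pid_tgid() >> 32,
    };

    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &e, sizeof(e));
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

On the user-space side a single handle_events() callback would then switch on e->prog_id instead of polling 16 separate buffers.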

One way I can help you, if you do need to have multiple
PERF_EVENT_ARRAY maps that you need to consume, is to add perf_buffer
APIs similar to ring_buffer that would allow you to epoll all of them
simultaneously. Let me know if you are interested. That would
effectively eliminate your outer loop (LIST_FOREACH(evt, &event_head,
list)); you'd just be doing while(true) perf_buffer__poll() across
all perf buffers simultaneously. But a single perf_buffer allows you
to do the same, effectively.




Re: Polling multiple BPF_MAP_TYPE_PERF_EVENT_ARRAY causing dropped events

Ian
 

If you have the luxury of using Linux kernel 5.8 or newer, you can try
a new BPF ring buffer map, that provides MPSC queue (so you can queue
from multiple CPUs simultaneously, while BPF perf buffer allows you to
only enqueue on your current CPU). But what's more important for you,
libbpf's ring_buffer interface allows you to do exactly what you need:
poll multiple independent ring buffers simultaneously from a single
epoll FD. See [0] for example of using that API in user-space, plus
[1] for corresponding BPF-side code.

But having said that, we should probably extend libbpf's perf_buffer
API to support similar use cases. I'll try to do this some time soon.

[0] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c#L54-L62
[1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
Unfortunately my project is currently targeting Ubuntu 20.04 which ships with linux kernel version 5.4. It is a shame because the new ring buffer interface looks excellent! That said, would you still suggest we use the perf functionality? Or is this currently an incorrect usage? (More on possible changes below)
Yes, after your handle_event() callback returns, libbpf marks that
sample as consumed and the space it was taking is now available for
new samples to be enqueued. You are right, though, that by increasing
the size of each per-CPU perf ring buffer, you'll delay the drops,
because now you can accumulate more samples in the ring before the
ring buffer is full.
When you say delay the drops, do you mean that the threshold for dropping events is larger? So if I made my page count 256, would that make it far less likely to receive dropped events altogether? What would a suggested page count be? I initially thought 16 seemed like plenty, but I haven't found any research to support this. Will I always lose some events? Because that is the behavior I am witnessing right now. It seems like I always eventually start to lose events. Some of this might be due to a feedback loop where my BPF program that monitors file opens collects events triggered by my user space program. I was thinking about using a BPF map that is written by my user space program containing its PID and having all my BPF programs read that map and not write any corresponding events with matching PIDs. Any advice or thoughts on this would be appreciated!

Some of the event loss might also be attributed to the inefficiencies of my looping mechanism, although I think the feedback loop might be the bigger culprit. I am thinking about following the Sysdig approach, which is to have a single perf buffer that is used by all my BPF programs (16 in total). This would remove the loop and eliminate all but 1 perf buffer. I would think that would be more efficient because I am removing 15 perf buffers and their epoll_waits. Then I would use an ID member of each passed data structure to properly read the data.


Re: How to get function param in kretprobe bpf program? #bcc #pragma

Andrii Nakryiko
 

On Sun, Aug 9, 2020 at 8:10 PM <forrest0579@...> wrote:

On Fri, Aug 7, 2020 at 11:31 AM, Andrii Nakryiko wrote:

You can't do it reliably with kretprobe. kretprobe is executed right
before the function is exiting, by that time all the registers that
contained input parameters could have been used for something else. So
you got lucky with struct sock * here, but as a general rule you
shouldn't rely on this. You either have to pair kprobe with kretprobe
and store input arguments, or take a look at fexit program type, it is
just like kretprobe, but faster and guarantees input arguments are
preserved.

Thanks for the reply.
It seems fexit is a new feature, and I'm using Linux v4.15, so fexit can't help here.
kretprobe with kprobe is an option and I've found a lot of examples in bcc, but I am also wondering if it is always right to use pid_tgid as the key to store params and get them from the kretprobe.
I am wondering if there is a chance that the following case would happen:

0. attach kprobe program in tcp_set_state, store params in HASHMAP using pid_tgid as key; attach kretprobe in tcp_set_state, lookup params using pid_tgid
1. kprobe program triggered twice with the same pid_tgid before the kretprobe executed, so the kretprobe can only get the last params

I have this concern because I'm using golang and two goroutines may map to one thread in the kernel. If one goroutine gets interrupted when executing tcp_set_state, another one would have a chance to execute tcp_set_state with the same pid_tgid.
I don't think golang can interrupt thread while it's being executed in
the kernel. So from the golang perspective I wouldn't worry, the
kernel will execute both kprobe and corresponding kretprobe before
golang runtime can do anything about that. But in general, it's
possible to attach kprobe to a kernel function that could be called
multiple times for a given thread, at which point pid_tgid won't be
enough. This cannot happen for syscalls and many other kernel
functions, though. I would imagine that's not the case for
tcp_set_state either. But please double check kernel sources to be
absolutely sure.
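[The kprobe/kretprobe pairing discussed above is commonly sketched like the following. This is illustrative only: the map size and args struct are assumptions, and the PT_REGS_PARM macros assume the target arch is defined at build time (e.g. -D__TARGET_ARCH_x86 with libbpf's bpf_tracing.h). Entry args are stored keyed by pid_tgid and the entry is deleted on return to avoid stale state.]

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct entry_args {
    __u64 sk;       /* struct sock *, stored as an integer */
    int state;
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u64);
    __type(value, struct entry_args);
} args_map SEC(".maps");

SEC("kprobe/tcp_set_state")
int enter_tcp_set_state(struct pt_regs *ctx)
{
    __u64 id = bpf_get_current_pid_tgid();
    struct entry_args a = {
        .sk = PT_REGS_PARM1(ctx),        /* args are valid at entry */
        .state = (int)PT_REGS_PARM2(ctx),
    };

    bpf_map_update_elem(&args_map, &id, &a, BPF_ANY);
    return 0;
}

SEC("kretprobe/tcp_set_state")
int exit_tcp_set_state(struct pt_regs *ctx)
{
    __u64 id = bpf_get_current_pid_tgid();
    struct entry_args *a = bpf_map_lookup_elem(&args_map, &id);

    if (!a)
        return 0;
    /* ... use a->sk and a->state here ... */
    bpf_map_delete_elem(&args_map, &id);  /* drop the stored entry */
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Note the caveat from the discussion: this keying breaks if the function can be re-entered by the same thread before the kretprobe fires.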


Re: Polling multiple BPF_MAP_TYPE_PERF_EVENT_ARRAY causing dropped events

Andrii Nakryiko
 

On Mon, Aug 10, 2020 at 5:22 AM Ian <icampbe14@...> wrote:


The project I am working on generically loads BPF object files, pins their respective maps, and then proceeds to use perf_buffer__poll from libbpf to poll the maps. I currently am polling the multiple maps this way after loading and setting everything else up:

while(true) {
    LIST_FOREACH(evt, &event_head, list) {
        if(evt->map_loaded == 1) {
            err = perf_buffer__poll(evt->pb, 10);
            if(err < 0) {
                break;
            }
        }
    }
}

Where an evt is a structure that looks like:

struct evt_struct {
    char * map_name;
    FILE * fp;
    int map_loaded;
    ...<some elements removed for clarity>...
    struct perf_buffer * pb;
    LIST_ENTRY(evt_struct) list;
};

Essentially each event (evt) in this program correlates to a BPF program. I am looping through the events and calling perf_buffer__poll for each of them. This doesn't seem efficient, and to me it makes the epoll_wait that perf_buffer__poll calls lose any of its efficiency by looping through the events beforehand. In perf_buffer__poll, epoll is used to poll each CPU. Is there a more efficient way to poll multiple maps like this? Does it involve dropping perf? I don't like that I have to make a separate epoll context for each BPF program I am going to poll, one that just checks the CPUs. It would be better if I just had two sets for epoll to monitor, but then I would lose the built-in perf functionality. Beyond efficiency, my current polling implementation drops a significant number of events (i.e. the lost event callback in the perf options is called). This is the issue that really must be fixed. I have some ideas that might be worth trying, but I wanted to ascertain more information before I do any substantial refactoring:

1) I was thinking about dropping perf and just using another BPF map type (Hash, Array) to pass elements back to user space, then using a standard epoll context to monitor all the maps' FDs. I wouldn't lose any events that way (or if I did I would never know). But I have read in various books that perf maps are the ideal way to send data to user space...
If you have the luxury of using Linux kernel 5.8 or newer, you can try
a new BPF ring buffer map, that provides MPSC queue (so you can queue
from multiple CPUs simultaneously, while BPF perf buffer allows you to
only enqueue on your current CPU). But what's more important for you,
libbpf's ring_buffer interface allows you to do exactly what you need:
poll multiple independent ring buffers simultaneously from a single
epoll FD. See [0] for example of using that API in user-space, plus
[1] for corresponding BPF-side code.

But having said that, we should probably extend libbpf's perf_buffer
API to support similar use cases. I'll try to do this some time soon.

[0] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c#L54-L62
[1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c


2) Do perf maps or their buffer pages (for the mmap ring buffer) get cleaned up automatically? When do analyzed entries get removed? I tried increasing the page count of my perf buffer and it just took longer for me to start getting lost events, which almost suggests I am leaking memory. Am I using perf incorrectly? Each perf buffer is created by:

pb_opts.sample_cb = handle_events;
pb_opts.lost_cb = handle_lost_events;
evt->pb = perf_buffer__new(map_fd, 16, &pb_opts); // Where the map_fd is received from a bpf_object_get call
Yes, after your handle_event() callback returns, libbpf marks that
sample as consumed and the space it was taking is now available for
new samples to be enqueued. You are right, though, that by increasing
the size of each per-CPU perf ring buffer, you'll delay the drops,
because now you can accumulate more samples in the ring before the
ring buffer is full.


Any help or advice would be appreciated!

- Ian


Polling multiple BPF_MAP_TYPE_PERF_EVENT_ARRAY causing dropped events

Ian
 

The project I am working on generically loads BPF object files, pins their respective maps, and then proceeds to use perf_buffer__poll from libbpf to poll the maps. I currently am polling the multiple maps this way after loading and setting everything else up:

        while(true) {
            LIST_FOREACH(evt, &event_head, list) {
                if(evt->map_loaded == 1) {
                    err = perf_buffer__poll(evt->pb, 10);
                    if(err < 0) {
                        break;
                    }
                }
            }
        }

Where an evt is a structure that looks like:

struct evt_struct {
    char * map_name;
    FILE * fp;
    int map_loaded;
    ...<some elements removed for clarity>...
    struct perf_buffer * pb;
    LIST_ENTRY(evt_struct) list;
};

Essentially each event (evt) in this program correlates to a BPF program. I am looping through the events and calling perf_buffer__poll for each of them. This doesn't seem efficient, and to me it makes the epoll_wait that perf_buffer__poll calls lose any of its efficiency by looping through the events beforehand. In perf_buffer__poll, epoll is used to poll each CPU. Is there a more efficient way to poll multiple maps like this? Does it involve dropping perf? I don't like that I have to make a separate epoll context for each BPF program I am going to poll, one that just checks the CPUs. It would be better if I just had two sets for epoll to monitor, but then I would lose the built-in perf functionality. Beyond efficiency, my current polling implementation drops a significant number of events (i.e. the lost event callback in the perf options is called). This is the issue that really must be fixed. I have some ideas that might be worth trying, but I wanted to ascertain more information before I do any substantial refactoring:

1) I was thinking about dropping perf and just using another BPF map type (Hash, Array) to pass elements back to user space, then using a standard epoll context to monitor all the maps' FDs. I wouldn't lose any events that way (or if I did I would never know). But I have read in various books that perf maps are the ideal way to send data to user space...

2) Do perf maps or their buffer pages (for the mmap ring buffer) get cleaned up automatically? When do analyzed entries get removed? I tried increasing the page count of my perf buffer and it just took longer for me to start getting lost events, which almost suggests I am leaking memory. Am I using perf incorrectly? Each perf buffer is created by:

pb_opts.sample_cb = handle_events;
pb_opts.lost_cb = handle_lost_events;
evt->pb = perf_buffer__new(map_fd, 16, &pb_opts); // Where the map_fd is received from a bpf_object_get call

Any help or advice would be appreciated!

- Ian
 


Re: How to get function param in kretprobe bpf program? #bcc #pragma

Forrest Chen
 

On Fri, Aug 7, 2020 at 11:31 AM, Andrii Nakryiko wrote:
You can't do it reliably with kretprobe. kretprobe is executed right
before the function is exiting, by that time all the registers that
contained input parameters could have been used for something else. So
you got lucky with struct sock * here, but as a general rule you
shouldn't rely on this. You either have to pair kprobe with kretprobe
and store input arguments, or take a look at fexit program type, it is
just like kretprobe, but faster and guarantees input arguments are
preserved.
Thanks for the reply.
It seems fexit is a new feature, and I'm using Linux v4.15, so fexit can't help here.
kretprobe with kprobe is an option and I've found a lot of examples in bcc, but I am also wondering if it is always right to use pid_tgid as the key to store params and get them from the kretprobe.
I am wondering if there is a chance that the following case would happen:

0. attach kprobe program in tcp_set_state, store params in HASHMAP using pid_tgid as key; attach kretprobe in tcp_set_state, lookup params using pid_tgid
1. kprobe program triggered twice with the same pid_tgid before the kretprobe executed, so the kretprobe can only get the last params

I have this concern because I'm using golang and two goroutines may map to one thread in the kernel. If one goroutine gets interrupted when executing tcp_set_state, another one would have a chance to execute tcp_set_state with the same pid_tgid.


Re: How to get function param in kretprobe bpf program? #bcc #pragma

Andrii Nakryiko
 

On Fri, Aug 7, 2020 at 12:45 AM <forrest0579@...> wrote:

When using kprobe in bcc, I can get param directly like `int kprobe__tcp_set_state(struct pt_regs *ctx, struct sock *sk, int state)`
But it seems not to work in kretprobe, I've found that I can get first param by using `struct sock *sk = (void*)ctx->bx`
but I can't get the second param through `ctx->cx`.
Am I getting the wrong register? I'm on x86-64.
You can't do this reliably with a kretprobe. A kretprobe is executed right
before the function exits; by that time, all the registers that
contained input parameters could have been reused for something else. So
you got lucky with struct sock * here, but as a general rule you
shouldn't rely on this. You either have to pair a kprobe with a kretprobe
and store the input arguments, or take a look at the fexit program type: it
is just like kretprobe, but faster, and it guarantees the input arguments
are preserved.


How to get function param in kretprobe bpf program? #bcc #pragma

Forrest Chen
 

When using a kprobe in bcc, I can get the params directly, e.g. `int kprobe__tcp_set_state(struct pt_regs *ctx, struct sock *sk, int state)`.
But that doesn't seem to work in a kretprobe. I've found that I can get the first param with `struct sock *sk = (void *)ctx->bx`,
but I can't get the second param through `ctx->cx`.
Am I reading the wrong register? I'm on x86_64.


Clang target bpf compile issue/fail on Ubuntu and Debian

Jesper Dangaard Brouer
 

The BPF UAPI header file <linux/bpf.h> includes <linux/types.h>, which gives
BPF-programs access to types e.g. __u32, __u64, __u8, etc.

On Ubuntu/Debian, compiling with the clang option[1] "-target bpf"
fails because clang cannot find the file <asm/types.h>, which is
included from <linux/types.h>. This is because Ubuntu/Debian supports
multiple architectures on a single system[2]. On x86_64 the file
<asm/types.h> is located in /usr/include/x86_64-linux-gnu/, which the distro
compiler adds to its search path (/usr/include/<triplet> [3]). Note that it
works when not specifying "-target bpf", but as explained in the kernel docs[1],
the clang bpf target really should be used (to avoid other issues).

There are two workarounds: (1) add an extra include dir on Ubuntu (which
seems too x86-specific), e.g. CFLAGS += -I/usr/include/x86_64-linux-gnu;
or (2) install the gcc-multilib package on Ubuntu.
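
Concretely, the two workarounds might look like the commands below. These are sketches for an Ubuntu x86_64 host; prog.c is a placeholder for your BPF source file, and the include path would differ on other architectures.

```
# Workaround 1: point clang at the x86_64 multiarch include directory
clang -target bpf -I/usr/include/x86_64-linux-gnu -O2 -c prog.c -o prog.o

# Workaround 2: install gcc-multilib, which provides /usr/include/asm
sudo apt-get install gcc-multilib
clang -target bpf -O2 -c prog.c -o prog.o
```

Workaround 2 is less architecture-specific in the Makefile, at the cost of pulling in an extra package.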

The question is: Should Ubuntu/Debian have a /usr/include/<triplet>
directory for BPF? (as part of their multi-arch approach)

Or should clang use the compile host's triplet for the /usr/include/<triplet>
path even when given the -target bpf option?

p.s. GCC chose the 'bpf-unknown-none' target triplet for BPF.


Links:
[1] https://www.kernel.org/doc/html/latest/bpf/bpf_devel_QA.html#q-clang-flag-for-target-bpf
[2] https://wiki.ubuntu.com/MultiarchSpec
[3] https://wiki.osdev.org/Target_Triplet
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


Accessing current netns info in a TC eBPF program

siva.gaggara@...
 

Hi,

I am trying to attach the same TC eBPF program instance to both host
and container interfaces. So some of the maps need to be qualified
with the netns id. I was wondering if there is a way to access the
'current' netns info in a TC eBPF program. It would be quite helpful if
you could provide me with some pointers.

Thanks

Siva


Re: Invalid filename/mode in openat tracepoint data

alessandro.gario@...
 

Hello Tristan!

That is the same path I found when debugging with strace! I think I also saw a missing comm string during my tests (with printk from BCC), but I would have to reproduce it again to be sure.

I'm going to test this one more time on kernel 4.18, as I don't remember finding this problem when I started writing the library on Ubuntu 18.10 (and maybe I'll also try to take a look at the openat implementation).

Thanks so much for your help!

Alessandro Gario

On Fri, Jul 24, 2020 at 10:11 am, Tristan Mayfield <mayfieldtristan@...> wrote:
Alessandro,
I figured out that it's non-deterministic: sometimes certain commands (git, awk, rm, uname, etc.) will show an openat with no filename, and other times the filename is present.
I ran these commands experimentally and got results similar to what I have below for all of them:
$ rm something
sys_enter_openat comm: rm pid:3512 filename:/etc/ld.so.cache (140398792747904)
sys_enter_openat comm: rm pid:3512 filename:/lib/x86_64-linux-gnu/libc.so.6 (140398792789520)
sys_enter_openat comm: rm pid:3512 filename:/usr/lib/locale/locale-archive (140398792339408)
sys_enter_openat comm: rm pid:3514 filename:/etc/ld.so.cache (139648615484288)
sys_enter_openat comm: rm pid:3514 filename:/lib/x86_64-linux-gnu/libc.so.6 (139648615525904)
sys_enter_openat comm: rm pid:3514 filename: (139648615075792)
Because it's been so consistent, I believe the missing file is... always? Most of the time? At least a good part of the time "/usr/lib/locale/locale-archive".
I'm not sure why an archive file would behave differently, but it seems to be causing this issue. You can use the below bpftrace script to figure out which commands most often create the no-name situation.
tracepoint:syscalls:sys_enter_open,
tracepoint:syscalls:sys_enter_openat
{
        if (str(args->filename) == "") {
                printf("sys_enter_openat comm: %s pid:%d filename:%s (%ld)\n",
                       comm, pid, str(args->filename), args->filename);
        }
}
Tristan


Re: Invalid filename/mode in openat tracepoint data

Tristan Mayfield
 

Alessandro,

I figured out that it's non-deterministic: sometimes certain commands (git, awk, rm, uname, etc.) will show an openat with no filename, and other times the filename is present.
I ran these commands experimentally and got results similar to what I have below for all of them:

$ rm something
sys_enter_openat comm: rm pid:3512 filename:/etc/ld.so.cache (140398792747904)
sys_enter_openat comm: rm pid:3512 filename:/lib/x86_64-linux-gnu/libc.so.6 (140398792789520)
sys_enter_openat comm: rm pid:3512 filename:/usr/lib/locale/locale-archive (140398792339408)

sys_enter_openat comm: rm pid:3514 filename:/etc/ld.so.cache (139648615484288)
sys_enter_openat comm: rm pid:3514 filename:/lib/x86_64-linux-gnu/libc.so.6 (139648615525904)
sys_enter_openat comm: rm pid:3514 filename: (139648615075792)

Because it's been so consistent, I believe the missing file is... always? Most of the time? At least a good part of the time "/usr/lib/locale/locale-archive".
I'm not sure why an archive file would behave differently, but it seems to be causing this issue. You can use the below bpftrace script to figure out which commands most often create the no-name situation.

tracepoint:syscalls:sys_enter_open,
tracepoint:syscalls:sys_enter_openat
{
        if (str(args->filename) == "") {
                printf("sys_enter_openat comm: %s pid:%d filename:%s (%ld)\n",
                       comm, pid, str(args->filename), args->filename);
        }
}

Tristan
