Re: Polling multiple BPF_MAP_TYPE_PERF_EVENT_ARRAY causing dropped events


Ian,

If you have the luxury of using Linux kernel 5.8 or newer, you can try
the new BPF ring buffer map, which provides an MPSC queue (so you can
enqueue from multiple CPUs simultaneously, while the BPF perf buffer
only allows you to enqueue on your current CPU). But what's more
important for you, libbpf's ring_buffer interface allows you to do
exactly what you need: poll multiple independent ring buffers
simultaneously from a single epoll FD. See [0] for an example of using
that API in user space, plus [1] for the corresponding BPF-side code.

But having said that, we should probably extend libbpf's perf_buffer
API to support similar use cases. I'll try to do this some time soon.

[0] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c#L54-L62
[1] https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
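
For reference, the user-space shape of that API is roughly the following. This is only a minimal sketch, not the selftest itself: the map FDs, the callback, and the poll loop are placeholders, assuming two independent BPF_MAP_TYPE_RINGBUF maps whose FDs you already have.

#include <bpf/libbpf.h>

/* called for every sample from either map; a non-zero return stops the poll early */
static int handle_sample(void *ctx, void *data, size_t size)
{
        /* decode `data` here */
        return 0;
}

int poll_two_ringbufs(int map_fd1, int map_fd2)
{
        struct ring_buffer *rb;
        int err;

        /* the first map sets up the ring_buffer and its internal epoll FD ... */
        rb = ring_buffer__new(map_fd1, handle_sample, NULL, NULL);
        if (!rb)
                return -1;

        /* ... and further maps are registered on that same epoll FD */
        err = ring_buffer__add(rb, map_fd2, handle_sample, NULL);
        if (err)
                goto out;

        /* a single poll call services samples from both maps */
        while ((err = ring_buffer__poll(rb, 100 /* ms */)) >= 0)
                ;
out:
        ring_buffer__free(rb);
        return err;
}

Both maps end up on the one epoll FD that ring_buffer__new() creates internally, which is exactly the "poll many buffers from one place" behavior described above.
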
Unfortunately, my project is currently targeting Ubuntu 20.04, which ships with Linux kernel 5.4. It is a shame, because the new ring buffer interface looks excellent! That said, would you still suggest we use the perf functionality, or is this currently an incorrect usage? (More on possible changes below.)

Yes, after your handle_event() callback returns, libbpf marks that
sample as consumed and the space it was taking is now available for
new samples to be enqueued. You are right, though, that by increasing
the size of each per-CPU perf ring buffer, you'll delay the drops,
because now you can accumulate more samples in the ring before the
ring buffer is full.
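
Concretely, with the opts-based perf_buffer__new() that libbpf shipped around that time (later releases changed this signature), sizing the per-CPU rings larger and adding a lost-sample callback so the drops become visible looks roughly like this; the map FD and callback bodies are placeholders:

#include <stdio.h>
#include <bpf/libbpf.h>

static void handle_event(void *ctx, int cpu, void *data, __u32 size)
{
        /* consume the sample; once this returns, its ring space is reusable */
}

static void handle_lost(void *ctx, int cpu, __u64 lost_cnt)
{
        fprintf(stderr, "lost %llu events on CPU %d\n",
                (unsigned long long)lost_cnt, cpu);
}

int open_perf_buf(int map_fd)
{
        struct perf_buffer_opts pb_opts = {
                .sample_cb = handle_event,
                .lost_cb = handle_lost,
        };
        /* page_cnt is pages per CPU and must be a power of two;
         * 256 pages = 1 MiB per CPU with 4 KiB pages
         */
        struct perf_buffer *pb = perf_buffer__new(map_fd, 256, &pb_opts);

        if (libbpf_get_error(pb))
                return -1;

        while (perf_buffer__poll(pb, 100 /* ms */) >= 0)
                ;

        perf_buffer__free(pb);
        return 0;
}
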
When you say "delay the drops," do you mean that the threshold for dropping events is higher? If I raised the page count to 256, would that make dropped events far less likely altogether? What would a suggested page count be? I initially thought 16 pages seemed like plenty, but I haven't found any research to support that.

Will I always lose some events? That is the behavior I am witnessing right now: it seems like I always eventually start to lose events. Some of this might be due to a feedback loop, where my BPF program that monitors file opens collects events triggered by my own user-space program. I was thinking about having my user-space program write its PID into a BPF map, and having all my BPF programs read that map and skip any event with a matching PID (a rough sketch of what I mean is below). Any advice or thoughts on this would be appreciated!
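
For the self-filtering idea, a minimal BPF-side sketch could look like the following; the map name (agent_pid_map), the tracepoint, and the overall layout are illustrative, not taken from an existing program:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* one-entry map that user space fills with its own PID at startup */
struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u32);
} agent_pid_map SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_open(void *ctx)
{
        __u32 key = 0;
        __u32 *agent_pid = bpf_map_lookup_elem(&agent_pid_map, &key);
        __u32 pid = bpf_get_current_pid_tgid() >> 32;   /* tgid == user-space PID */

        if (agent_pid && *agent_pid == pid)
                return 0;       /* skip events caused by our own agent */

        /* ... build the event and bpf_perf_event_output() as before ... */
        return 0;
}

char LICENSE[] SEC("license") = "GPL";

User space would write its PID into the map once at startup, e.g. bpf_map_update_elem(map_fd, &key, &my_pid, BPF_ANY) with my_pid = getpid(), before attaching the programs.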

Some of the event loss might also be attributed to the inefficiencies of my looping mechanism, although I think the feedback loop is the bigger culprit. I am considering following the Sysdig approach: a single perf buffer shared by all of my BPF programs (16 in total). This would remove the loop and eliminate all but one perf buffer, which I would expect to be more efficient since it removes 15 perf buffers and their epoll_waits. I would then use an ID member in each passed data structure to decode the data correctly (see the sketch below).
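
A sketch of that shared-buffer layout, with entirely hypothetical names (event_type, event_hdr, the per-type structs), might be:

#include <linux/types.h>

/* shared between BPF and user space, e.g. in a common header file */
enum event_type {
        EVENT_FILE_OPEN = 1,
        EVENT_EXEC      = 2,
        /* ... one value per BPF program / event kind ... */
};

struct event_hdr {
        __u32 type;             /* enum event_type */
        __u32 pid;
};

struct file_open_event {
        struct event_hdr hdr;
        char filename[256];
};

/* user-space side: the single perf_buffer callback for all 16 programs */
static void handle_event(void *ctx, int cpu, void *data, __u32 size)
{
        const struct event_hdr *hdr = data;

        if (size < sizeof(*hdr))
                return;

        switch (hdr->type) {
        case EVENT_FILE_OPEN:
                /* cast data to struct file_open_event and handle it */
                break;
        case EVENT_EXEC:
                /* ... */
                break;
        default:
                break;
        }
}

The only contract is that every program's sample starts with the common header, so the one callback can always read hdr->type before deciding how to interpret the rest of the payload.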
