Can and should you use very large maps with XDP?


liamkelly17@...
 

Is there anything that can go haywire if an eBPF map gets too big in an XDP program? I am looking into importing a large database of unique IPs (10,000-100,000) into an eBPF hash map in an XDP program to implement ACL-like behavior. Besides slower performance, is there anything else that can go wrong with a very large map? I have seen a blog post suggesting that locked memory limits might need to be adjusted with ulimit, but even with that adjustment, can an eBPF program work with a hash map of 500K entries?
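
For concreteness, a minimal sketch of such a map and the XDP lookup (the map name, key/value layout, and the 500K sizing are illustrative assumptions, not a specific setup):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 500000);   /* pre-sized; BPF hash maps do not grow */
    __type(key, __u32);            /* IPv4 address, network byte order */
    __type(value, __u8);           /* presence flag / verdict */
} blocked_ips SEC(".maps");

SEC("xdp")
int xdp_acl(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds checks required by the verifier before touching headers */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_PASS;

    /* O(1) average-case lookup regardless of how many entries the map holds */
    if (bpf_map_lookup_elem(&blocked_ips, &iph->saddr))
        return XDP_DROP;

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";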

-Liam


Yonghong Song
 

On Wed, Jan 2, 2019 at 12:58 PM <liamkelly17@...> wrote:

Is there anything that can go haywire if an eBPF map gets too big in an XDP program? I am looking into importing a large database of unique IPs (10,000-100,000) into an eBPF hash map in an XDP program to implement ACL-like behavior. Besides slower performance, is there anything else that can go wrong with a very large map? I have seen a blog post suggesting that locked memory limits might need to be adjusted with ulimit, but even with that adjustment, can an eBPF program work with a hash map of 500K entries?
It should work as long as the total memory is less than 4GB (your table
memory + some overhead for each table entry).
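
As a sketch of the ulimit point, the locked-memory limit can also be raised programmatically in the loader before the maps are created; the helper below is illustrative, not taken from any particular loader:

#include <sys/resource.h>
#include <stdio.h>

/* Call this before loading the BPF object so map allocations are not
 * rejected by the RLIMIT_MEMLOCK accounting. Equivalent to `ulimit -l`,
 * but done by the loader process itself. */
static int bump_memlock_rlimit(void)
{
    struct rlimit r = {
        .rlim_cur = RLIM_INFINITY,
        .rlim_max = RLIM_INFINITY,
    };

    if (setrlimit(RLIMIT_MEMLOCK, &r)) {
        perror("setrlimit(RLIMIT_MEMLOCK)");
        return -1;
    }
    return 0;
}

For rough sizing: with a 4-byte IPv4 key, a small value, and a few tens of bytes of per-entry overhead, 500K entries comes to on the order of tens of MB, which is far below the 4GB ceiling mentioned above.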

