Re: [RFC PATCH 00/11] OVS eBPF datapath.


From: Alexei Starovoitov

On Thu, Jun 28, 2018 at 07:19:35AM -0700, William Tu wrote:
> Hi Alexei,
>
> Thanks a lot for the feedback!
>
> On Wed, Jun 27, 2018 at 8:00 PM, Alexei Starovoitov
> <alexei.starovoitov@...> wrote:
>> On Sat, Jun 23, 2018 at 05:16:32AM -0700, William Tu wrote:
>>>
>>> Discussion
>>> ==========
>>> We are still actively working on finishing the feature; currently
>>> basic forwarding and the tunnel features work, but they are still
>>> under heavy debugging and development. The purpose of this RFC is to
>>> get some early feedback and direction for completing the full feature
>>> set of the existing kernel OVS datapath (net/openvswitch/*).
>> Thank you for sharing the patches.

>>> Three major issues we are worried about:
>>> a. Megaflow support in BPF.
>>> b. Connection Tracking support in BPF.
>> my opinion on the above two didn't change.
>> To recap:
>> A. A non-scalable megaflow map is a no-go. I'd like to see a packet
>> classification algorithm like HiCuts or EffiCuts implemented instead,
>> since it can be shared by generic BPF, bpfilter, OVS, and likely others.
> We did try the decision tree approach using DPDK's ACL library. The
> lookup speed is 6 times faster than megaflow's tuple space search.
> However, update/insertion requires rebuilding/re-balancing the decision
> tree, so it's far too slow. I think HiCuts and EffiCuts suffer from the
> same issue: decision tree algorithms are scalable only for lookup
> operations, thanks to their optimization of tree depth, but not for
> update/insert/delete operations.
>
> On a customer's system we see a megaflow update/insert rate of around
> 10 rules/sec; this makes decision trees unusable, unless we invent
> something to optimize the update/insert time or an incremental update
> scheme for these decision tree algorithms.
is this a typo? you probably meant 10K rule updates a second?
The last time I dealt with these algorithms we had 100K ACL updates a
second. It was an important metric that we were optimizing for.
I'm pretty sure the '*cuts' algos do many thousands per second
non-optimized.
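
For context on the rebuild cost being discussed, here is a minimal
sketch of the librte_acl update path (update_classifier() is an
illustrative wrapper, not a DPDK or OVS function): librte_acl has no
incremental insert, so every rule change pays for a full
rte_acl_build(), whose cost grows with the total rule count rather
than with the size of the change.

/* Sketch, not from the patch set: rte_acl_add_rules() only queues
 * rules; rte_acl_build() reconstructs the whole runtime trie.  A
 * stream of single-rule updates therefore pays the full-rebuild
 * price every time.
 */
#include <rte_acl.h>

static int
update_classifier(struct rte_acl_ctx *ctx,
                  const struct rte_acl_rule *rule,
                  const struct rte_acl_config *cfg)
{
        int ret;

        ret = rte_acl_add_rules(ctx, rule, 1);  /* queue one new rule */
        if (ret != 0)
                return ret;

        /* Full rebuild of the trie on every call. */
        return rte_acl_build(ctx, cfg);
}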

>>> c. Verifier limitation.
>> Not sure what limitations you're concerned about.
> Mostly related to the stack. The flow key OVS uses (struct sw_flow_key)
> is 464 bytes. We've trimmed it a lot, down to around 300 bytes, but
> that's still huge considering that BPF's stack limit is 512 bytes.
have you tried using a per-cpu array of one element with a large value
instead of the stack?
In the latest verifier, most operations that can be done with a stack
pointer can be done with a pointer to a map value too.
