
Re: bpf helper functions

riya
 

<ping>

On Mon, Apr 2, 2018 at 1:30 PM, riya khanna <riyakhanna1983@...> wrote:
Oh, it's simply to inspect/filter/drop a compressed application packet in the kernel without incurring network stack processing.
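
Purely as an illustration of that use case, here is a minimal sketch of what a program using such a helper could look like. The helper bpf_skb_decompress() and its helper ID are hypothetical; nothing like it exists in the kernel today, this only shows the API shape the question is about.

/* Hypothetical sketch: filter on a decompressed payload at the tc layer.
 * bpf_skb_decompress() does not exist; it stands in for a helper wrapping
 * lib/zlib_* / lib/lz*.  Helper ID 255 is made up. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>

/* hypothetical helper prototype, declared bcc/selftests style */
static int (*bpf_skb_decompress)(void *skb, __u32 offset, void *to, __u32 len) =
        (void *) 255;

__attribute__((section("classifier"), used))
int filter_compressed(struct __sk_buff *skb)
{
        char buf[64];
        int len;

        /* decompress up to sizeof(buf) bytes of payload into buf */
        len = bpf_skb_decompress(skb, 0, buf, sizeof(buf));
        if (len < 0)
                return TC_ACT_OK;       /* not compressed / unsupported */

        /* example policy: drop if the decompressed payload starts with "BAD" */
        if (len >= 3 && buf[0] == 'B' && buf[1] == 'A' && buf[2] == 'D')
                return TC_ACT_SHOT;

        return TC_ACT_OK;
}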

On Mon, Apr 2, 2018 at 11:36 AM, Y Song <ys114321@...> wrote:
Hi, Ashish,

It seems that this has not been actively discussed before. Could you
describe your use case?

Thanks!

Yonghong

On Mon, Apr 2, 2018 at 8:11 AM, riya khanna via iovisor-dev
<iovisor-dev@...> wrote:
> Dear developers,
>
> Is it a good idea to export kernel compression/decompression (lib/zlib_*,
> lib/lz*) as helper functions to enable corresponding use cases? Has this
> been tried before?
>
> Thanks,
> Ashish
>
> _______________________________________________
> iovisor-dev mailing list
> iovisor-dev@...
> https://lists.iovisor.org/mailman/listinfo/iovisor-dev
>



Best userspace programming API for XDP features query to kernel?

Daniel Borkmann
 

On 04/05/2018 10:51 PM, Jesper Dangaard Brouer wrote:
On Thu, 5 Apr 2018 12:37:19 +0200
Daniel Borkmann <daniel@...> wrote:

On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
I don't really mind if you go via ethtool, as long as we handle this
generically from there and e.g. call the dev's ndo_bpf handler such that
we keep all the information in one place. This can be a new ndo_bpf command
e.g. XDP_QUERY_FEATURES or such.
Just to be clear: notice, as Victor points out[2], they are programmatically
going through the IOCTL (SIOCETHTOOL) and not using cmdline tools.
Sure, that was perfectly clear. (But at the same time, if you extend the
ioctl, it's obvious to also add support to the actual ethtool cmdline tool.)

[2] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326

If you want everything to go through the driver's ndo_bpf call anyway
(whose userspace API is netlink based), then at what point do you
Not really, that's the front end. ndo_bpf itself is a plain netdev op
and has no real tie to netlink.

want drivers to call their own ndo_bpf when activated through their
ethtool_ops? (Sorry, but I don't follow the flow you are proposing.)

Notice, I'm not directly against using the driver's ndo_bpf call. I can
see it does provide the kernel more flexibility than the ethtool IOCTL.
What I was saying is that even if you go via the ethtool ioctl API, where
you end up in dev_ethtool() and have some new ETHTOOL_* query command,
then instead of adding a new ethtool_ops callback, we can and should
reuse ndo_bpf from there.

[...]
Here, I want to discuss how drivers expose/tell userspace that they
support a given feature: specifically a bit for XDP_REDIRECT action
support.

Same for meta data,
Well, not really. It would be a "nice-to-have", but not strictly
needed as a feature bit. XDP meta-data is controlled via a helper,
and the BPF-prog can detect/see at runtime that the helper bpf_xdp_adjust_meta
returns -ENOTSUPP (it needs to check the return value anyhow). Thus,
there is not that much gained by exposing this for setup-time detection,
as all drivers should eventually support this, and we can detect
it at runtime.

The missing XDP_REDIRECT action feature bit is different, as the
BPF-prog cannot detect at runtime that this is an unsupported action.
Plus, at setup time we cannot query the driver for supported XDP actions.
Ok, so with the example of meta data, you're arguing that it's okay
to load a native XDP program onto a driver, and run actual traffic on
the NIC in order to probe for the availability of the feature, when you're
saying that it "can detect/see [at] runtime". I totally agree with you
that all drivers should eventually support this (same with XDP_REDIRECT),
but today there are even differences in drivers on bpf_xdp_adjust_meta()/
bpf_xdp_adjust_head() with regards to how much headroom they have available,
etc. (e.g. some of them have none), so right now you can either go and
read the code or do a runtime test with running actual traffic through
the NIC to check whether your BPF prog is supported or not. Theoretically,
you can do the same runtime test with XDP_REDIRECT (taking the warn in
bpf_warn_invalid_xdp_action() aside for a moment), but you do have the
trace_xdp_exception() tracepoint to figure it out; yes, it's a painful
hassle, but overall, it's not that different from the case you were trying
to distinguish here. For /both/ cases it would be nice to know at setup time
whether this would be supported or not. Hence, such a query is not just
limited to XDP_REDIRECT alone. What I'm trying to say is that, once such an
interface is agreed upon, the list of feature bits will undoubtedly grow;
only arguing about XDP_REDIRECT here would be short-term.

[...]
What about keeping this high level to users? E.g. say you have 2 options
that drivers can expose as netdev_features_strings, 'xdp-native-full' or
'xdp-native-partial'. If a driver truly supports all XDP features for a
given kernel, e.g. v4.16, then a query like 'ethtool -k foo' will say
'xdp-native-full'; if at least one feature is missing to be feature complete
from e.g. the above list, then ethtool will tell 'xdp-native-partial'; and if
not even the ndo_bpf callback exists, then no 'xdp-native-*' is reported.
I used to be an advocate for this. I even think I sent patches
implementing this. Later, I realized that this model is flawed.

When e.g. suricata loads, it needs to look at both "xdp-native-full" and
the kernel version to determine if the XDP_REDIRECT action is available.
Later, when a new kernel version gets released, the driver is missing a
new XDP feature. Then suricata, which doesn't use/need the new
feature, needs to be updated to check that a kernel below this version,
with 'xdp-native-partial' and this NIC driver, is still okay. Can you
see the problem?

Even if Suricata goes through the pain of keeping track of kernel
version vs. drivers vs. xdp-native-full/partial, they also want to
run their product on distro kernels. They might convince some distro
to backport some XDP features they need. So now they also need to
keep track of distro kernel minor versions... and all they really
wanted was a single feature bit saying whether the running NIC driver
supports the XDP_REDIRECT action code.
Yep, agree it's not pretty, not claiming any of this is. You kind of
need to be aware of the underlying kernel, similar to the tracing case.
The underlying problem is effectively the decoupling of program verification
that doesn't have/know the context of where it is being attached to in
this case. Thinking out loud for a sec on a couple of other options aside
from feature bits: what about i) providing the target ifindex to the
verifier for XDP programs, such that at verification time you have the
full context, similar to the nfp offload case today, or ii) populating some
XDP-specific auxiliary data to the BPF program at verification time such
that the driver can check at program attach time whether the requested
features are possible, and if not it will reject and respond with a netlink
extack message to the user (as we do in various XDP attach cases already
through XDP_SETUP_PROG{,_HW}).

This would, for example, avoid the need for feature bits, and do actual
rejection of the program while retaining flexibility (and avoiding
exposing bits that over time hopefully will be deprecated anyway due to all
XDP-aware drivers implementing them). For both cases i) and ii), it
would mean we make the verifier a bit smarter with regards to keeping
track of driver related (XDP) requirements. Program return code checking
is already present in the verifier's check_return_code() and we could
extend it for XDP as well, for example. Seems cleaner and more extensible
than feature bits, imho.

Thanks,
Daniel


Re: Best userspace programming API for XDP features query to kernel?

Jesper Dangaard Brouer
 

On Thu, 5 Apr 2018 14:47:16 -0700
Jakub Kicinski <jakub.kicinski@...> wrote:

On Thu, 5 Apr 2018 22:51:33 +0200, Jesper Dangaard Brouer wrote:
What about nfp in terms of XDP
offload capabilities, should they be included as well or is probing to load
the program and see if it loads/JITs as we do today just fine (e.g. you'd
otherwise end up with extra flags on a per BPF helper basis)?
No, not flags on a per-BPF-helper basis. As I've described above, helpers belong
to the BPF core, not the driver. Here I want to know what the specific
driver supports.
I think Daniel meant for nfp offload. The offload restrictions are
quite involved, are we going to be able to express those?
Let's keep things separate.

I'm requesting something really simple. I want the driver to tell me
what XDP actions it supports. We/I can implement an XDP_QUERY_ACTIONS
via ndo_bpf, problem solved. It is small, specific and simple.

For my other use-case of enabling XDP-xmit on a device, I can
implement another ndo_bpf extension. The current approach today is loading
a dummy XDP prog via ndo_bpf anyway (which is awkward). Again, a
specific change that lets us move one step further.


For your nfp offload use-case, you/we have to find a completely
different solution. You have hit a design choice made by BPF:
BPF is part of the core kernel, and helpers cannot be
loaded as kernel modules, as we cannot remove or add helpers after the
verifier has certified the program. And your nfp offload driver
basically comes as a kernel module.
(Details: you basically already solved your issue by modifying the
core verifier to do a callback to bpf_prog_offload_verify_insn().)
Point being, this is very different from what I'm requesting. Thus, for
offloading you already have a solution to my setup-time detection
problem, as your program gets rejected at setup/load time by the verifier.

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


Re: Best userspace programming API for XDP features query to kernel?

Jakub Kicinski
 

On Thu, 5 Apr 2018 22:51:33 +0200, Jesper Dangaard Brouer wrote:
What about nfp in terms of XDP
offload capabilities, should they be included as well or is probing to load
the program and see if it loads/JITs as we do today just fine (e.g. you'd
otherwise end up with extra flags on a per BPF helper basis)?
No, not flags on a per-BPF-helper basis. As I've described above, helpers belong
to the BPF core, not the driver. Here I want to know what the specific
driver supports.
I think Daniel meant for nfp offload. The offload restrictions are
quite involved, are we going to be able to express those?

This is a bit simpler but reminds me of the TC flower capability
discussion. Expressing features and capabilities gets messy quickly.

I have a gut feeling that a good starting point would be defining and
building a test suite or a set of probing tests to check things work at
system level (incl. redirects to different ports etc.) I think having
a concrete set of litmus tests that confirm the meaning of a given
feature/capability would go a long way in making people more comfortable
with accepting any form of BPF driver capability. And serious BPF
projects already do probing so it's just centralizing this in the
kernel.

That's my two cents.


Re: [Oisf-devel] Best userspace programming API for XDP features query to kernel?

Jesper Dangaard Brouer
 

On Thu, 5 Apr 2018 09:47:37 +0200
Victor Julien <lists@...> wrote:

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
Do you have an example of how this is queried?
The code for querying should not be too difficult.

It would likely be similar to how we currently "set"/attach an XDP
program, via its BPF file descriptor, to an ifindex. Eric Leblond
chose to hide this in the kernel library "libbpf"; see the code:

function bpf_set_link_xdp_fd()
https://github.com/torvalds/linux/blob/master/tools/lib/bpf/bpf.c#L456-L575

Given Suricata already depends on libbpf for eBPF and XDP support, it
might make sense to add an API call to "get" XDP link info, e.g.
bpf_get_link_xdp_features(int ifindex)?
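
For illustration, a minimal userspace sketch of what that could look like next to the existing attach path: bpf_set_link_xdp_fd() is the real libbpf call linked above, while bpf_get_link_xdp_features() and the XDP_DRV_F_REDIRECT flag are hypothetical and only show the API shape being proposed.

/* Sketch only: attach an already-loaded XDP program to an interface, with a
 * hypothetical driver feature query in front.  Only bpf_set_link_xdp_fd()
 * exists in libbpf today; header location depends on the libbpf install. */
#include <bpf/bpf.h>            /* bpf_set_link_xdp_fd() */
#include <linux/types.h>
#include <net/if.h>             /* if_nametoindex() */
#include <stdio.h>

int attach_xdp_prog(const char *ifname, int prog_fd)
{
        int ifindex = if_nametoindex(ifname);

        if (!ifindex)
                return -1;

#ifdef HAVE_BPF_GET_LINK_XDP_FEATURES   /* hypothetical API, not in libbpf */
        __u64 feats = 0;

        if (bpf_get_link_xdp_features(ifindex, &feats) == 0 &&
            !(feats & XDP_DRV_F_REDIRECT)) {
                fprintf(stderr, "%s: driver lacks XDP_REDIRECT support\n", ifname);
                return -1;
        }
#endif
        /* existing libbpf call, the same one Suricata uses to attach today */
        return bpf_set_link_xdp_fd(ifindex, prog_fd, 0);
}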

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


Re: Best userspace programming API for XDP features query to kernel?

Jesper Dangaard Brouer
 

On Thu, 5 Apr 2018 12:37:19 +0200
Daniel Borkmann <daniel@...> wrote:

On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
I don't really mind if you go via ethtool, as long as we handle this
generically from there and e.g. call the dev's ndo_bpf handler such that
we keep all the information in one place. This can be a new ndo_bpf command
e.g. XDP_QUERY_FEATURES or such.
Just to be clear: notice, as Victor points out[2], they are programmatically
going through the IOCTL (SIOCETHTOOL) and not using cmdline tools.

[2] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326

If you want everything to go through the driver's ndo_bpf call anyway
(whose userspace API is netlink based), then at what point do you
want drivers to call their own ndo_bpf when activated through their
ethtool_ops? (Sorry, but I don't follow the flow you are proposing.)

Notice, I'm not directly against using the driver's ndo_bpf call. I can
see it does provide the kernel more flexibility than the ethtool IOCTL.


More specifically, how would such a feature mask look? How fine-grained
would this be? When you add a new minor feature to, say, cpumap that not
all drivers support yet, we'd need a new flag each time, no?
No, CPUMAP is not a driver-level feature, and thus does not require a
feature flag exposed by the driver. CPUMAP depends on the driver
feature XDP_REDIRECT.

It is important that we separate driver-level features and BPF/XDP core
features. I feel that we constantly talk past each other when we mix
that up.

It is true that Suricata _also_ needs to detect if the running kernel
supports the map type called BPF_MAP_TYPE_CPUMAP. *BUT* that is a
completely separate mechanism. It is a core kernel bpf feature, and I
have accepted that this can only be done via probing the kernel (simply
use the bpf syscall and try to create this map type).
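
For reference, a minimal sketch of that probing approach (raw bpf(2) syscall, no library; the key/value sizes match what BPF_MAP_TYPE_CPUMAP expects, and the constant needs recent-enough uapi headers):

/* Probe whether the running kernel supports BPF_MAP_TYPE_CPUMAP by simply
 * trying to create such a map and checking whether the syscall succeeds. */
#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int kernel_has_cpumap(void)
{
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.map_type    = BPF_MAP_TYPE_CPUMAP;
        attr.key_size    = 4;           /* CPU index */
        attr.value_size  = 4;           /* per-CPU queue size (qsize) */
        attr.max_entries = 4;

        fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        if (fd < 0)
                return 0;               /* e.g. EINVAL: map type unknown */

        close(fd);
        return 1;
}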

Here, I want to discuss how drivers expose/tell userspace that they
support a given feature: specifically a bit for XDP_REDIRECT action
support.


Same for meta data,
Well, not really. It would be a "nice-to-have", but not strictly
needed as a feature bit. XDP meta-data is controlled via a helper,
and the BPF-prog can detect/see at runtime that the helper bpf_xdp_adjust_meta
returns -ENOTSUPP (it needs to check the return value anyhow). Thus,
there is not that much gained by exposing this for setup-time detection,
as all drivers should eventually support this, and we can detect
it at runtime.
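
As a concrete example of that runtime detection, a minimal XDP program sketch (the 4-byte meta-data size, the XDP_PASS fallback, and the raw helper declaration style are just example choices):

/* Sketch: detect at runtime whether the driver supports XDP meta-data by
 * checking the return value of bpf_xdp_adjust_meta(). */
#include <linux/bpf.h>

static int (*bpf_xdp_adjust_meta)(void *ctx, int offset) =
        (void *) BPF_FUNC_xdp_adjust_meta;

__attribute__((section("xdp"), used))
int xdp_meta_probe(struct xdp_md *ctx)
{
        __u32 *meta;
        void *data;

        /* try to reserve 4 bytes of meta-data in front of the packet */
        if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(__u32)) < 0)
                return XDP_PASS;        /* driver has no meta-data support */

        meta = (void *)(long)ctx->data_meta;
        data = (void *)(long)ctx->data;
        if ((void *)(meta + 1) > data)  /* bounds check for the verifier */
                return XDP_PASS;

        *meta = 1;                      /* example: tag packet for the skb/tc layer */
        return XDP_PASS;
}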

The missing XDP_REDIRECT action feature bit is different, as the
BPF-prog cannot detect at runtime that this is an unsupported action.
Plus, at setup time we cannot query the driver for supported XDP actions.


then potentially for the redirect memory return work,
I'm not sure the redirect memory return types belong here. First of
all, it is per RX-ring. Second, some other config method will likely be
the config interface; like with AF_XDP zero-copy, it is a new NDO. So it
is more loosely coupled.

or the af_xdp bits,
No need for a bit for AF_XDP in copy-mode (current RFC), as this only
depends on the driver supporting the XDP_REDIRECT action.

For AF_XDP in zero-copy mode, then yes, we need a way for userspace to
"see" if this mode is supported by the driver. But it might not need a
feature bit here... as the bind() call (which knows the ifindex) could
fail when it tries to enable this ZC mode. It would make userspace's
life easier to add ZC as a driver feature bit.


the xdp_rxq_info would have needed it, etc.
Same comment as for mem-type: not necessarily, as it is more loosely coupled
to XDP.

What about nfp in terms of XDP
offload capabilities, should they be included as well or is probing to load
the program and see if it loads/JITs as we do today just fine (e.g. you'd
otherwise end up with extra flags on a per BPF helper basis)?
No, not flags on a per-BPF-helper basis. As I've described above, helpers belong
to the BPF core, not the driver. Here I want to know what the specific
driver supports.

To make a
somewhat reliable assertion whether feature xyz would work, this would
explode in new feature bits long term. Additionally, if we end up with a
lot of feature flags, it will be very hard for users to determine whether
this particular set of features a driver supports actually represents a
fully supported native XDP driver.
Think about it: what does a "fully supported native XDP driver" mean
when the kernel evolves and new features get added? How will the
end-user know what XDP features are in their running kernel release?

What about keeping this high level to users? E.g. say you have 2 options
that drivers can expose as netdev_features_strings, 'xdp-native-full' or
'xdp-native-partial'. If a driver truly supports all XDP features for a
given kernel, e.g. v4.16, then a query like 'ethtool -k foo' will say
'xdp-native-full'; if at least one feature is missing to be feature complete
from e.g. the above list, then ethtool will tell 'xdp-native-partial'; and if
not even the ndo_bpf callback exists, then no 'xdp-native-*' is reported.
I used to be an advocate for this. I even think I sent patches
implementing this. Later, I realized that this model is flawed.

When e.g. suricata loads, it needs to look at both "xdp-native-full" and
the kernel version to determine if the XDP_REDIRECT action is available.
Later, when a new kernel version gets released, the driver is missing a
new XDP feature. Then suricata, which doesn't use/need the new
feature, needs to be updated to check that a kernel below this version,
with 'xdp-native-partial' and this NIC driver, is still okay. Can you
see the problem?

Even if Suricata goes through the pain of keeping track of kernel
version vs. drivers vs. xdp-native-full/partial, they also want to
run their product on distro kernels. They might convince some distro
to backport some XDP features they need. So now they also need to
keep track of distro kernel minor versions... and all they really
wanted was a single feature bit saying whether the running NIC driver
supports the XDP_REDIRECT action code.


Side-effect might be that it would give incentive to keep drivers in
state 'xdp-native-full' instead of being downgraded to
'xdp-native-partial'. Potentially, in the 'xdp-native-partial' state,
we can expose a high-level list of missing features that the driver
does not support yet, which would over time converge towards 'zero'
and thus 'xdp-native-full' again. ethtool itself could get a new XDP
specific query option that, based on this info, can then dump the
full list of supported and not supported features. In order for this
to not explode, such features would need to be kept on a high-level
basis, meaning if e.g. cpumap gets extended along with support for a
number of drivers, then those that missed out would need to be
temporarily re-flagged with e.g. 'cpumap not supported' until it gets
also implemented there. That way, we don't explode in adding too
fine-grained feature bit combinations long term and make it easier to
tell whether a driver supports the full set in native XDP or not.
Thoughts?
(I really liked creating an incentive for driver vendors)
Thoughts inlined above...

(Q2) Do Suricata devs have any preference (or other options/ideas)
for the way the kernel exposes this info to userspace?

[1]
http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case
Regarding how fine-grained the feature bits should be: I also want to mention
that I want the driver's XDP_REDIRECT action support to be decoupled
from whether the driver supports ndo_xdp_xmit, and whether ndo_xdp_xmit is
enabled or disabled.
E.g. for the macvlan driver, I don't see much performance gain in
implementing the native XDP-RX actions "side", while there will be a
huge performance gain in implementing ndo_xdp_xmit.

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


Re: Best userspace programming API for XDP features query to kernel?

Daniel Borkmann
 

On 04/04/2018 02:28 PM, Jesper Dangaard Brouer via iovisor-dev wrote:
Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
I don't really mind if you go via ethtool, as long as we handle this
generically from there and e.g. call the dev's ndo_bpf handler such that
we keep all the information in one place. This can be a new ndo_bpf command
e.g. XDP_QUERY_FEATURES or such.

More specifically, how would such a feature mask look? How fine-grained
would this be? When you add a new minor feature to, say, cpumap that not
all drivers support yet, we'd need a new flag each time, no? Same for meta data,
then potentially for the redirect memory return work, or the af_xdp bits, the
xdp_rxq_info would have needed it, etc. What about nfp in terms of XDP
offload capabilities, should they be included as well or is probing to load
the program and see if it loads/JITs as we do today just fine (e.g. you'd
otherwise end up with extra flags on a per BPF helper basis)? To make a
somewhat reliable assertion whether feature xyz would work, this would
explode in new feature bits long term. Additionally, if we end up with a
lot of feature flags, it will be very hard for users to determine whether
this particular set of features a driver supports actually represents a
fully supported native XDP driver.

What about keeping this high level to users? E.g. say you have 2 options
that drivers can expose as netdev_features_strings, 'xdp-native-full' or
'xdp-native-partial'. If a driver truly supports all XDP features for a
given kernel, e.g. v4.16, then a query like 'ethtool -k foo' will say
'xdp-native-full'; if at least one feature is missing to be feature complete
from e.g. the above list, then ethtool will tell 'xdp-native-partial'; and if
not even the ndo_bpf callback exists, then no 'xdp-native-*' is reported.

Side-effect might be that it would give incentive to keep drivers in state
'xdp-native-full' instead of being downgraded to 'xdp-native-partial'.
Potentially, in the 'xdp-native-partial' state, we can expose a high-level
list of missing features that the driver does not support yet, which would
over time converge towards 'zero' and thus 'xdp-native-full' again. ethtool
itself could get a new XDP specific query option that, based on this info,
can then dump the full list of supported and not supported features. In order
for this to not explode, such features would need to be kept on a high-level
basis, meaning if e.g. cpumap gets extended along with support for a number
of drivers, then those that missed out would need to be temporarily
re-flagged with e.g. 'cpumap not supported' until it gets also implemented
there. That way, we don't explode in adding too fine-grained feature bit
combinations long term and make it easier to tell whether a driver supports
the full set in native XDP or not. Thoughts?

(Q2) Do Suricata devs have any preference (or other options/ideas) for
the way the kernel exposes this info to userspace?

[1] http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case


Re: [Oisf-devel] Best userspace programming API for XDP features query to kernel?

Peter Manev <petermanev@...>
 

On 5 Apr 2018, at 09:47, Victor Julien <lists@...> wrote:

On 04-04-18 14:28, Jesper Dangaard Brouer wrote:
Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)
I think if it used the ioctl ETHTOOL interface it'd be easiest for
us, as we already have code in place for this to check offloading
settings. See [1].


Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
Do you have an example of how this is queried?


(Q2) Do Suricata devs have any preference (or other options/ideas) for
the way the kernel exposes this info to userspace?
Right now I think extending the ethtool logic is best for us.
+1
I would prefer that approach too.



[1] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326

--
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------


Re: [Oisf-devel] Best userspace programming API for XDP features query to kernel?

Victor Julien <lists@...>
 

On 04-04-18 14:28, Jesper Dangaard Brouer wrote:
Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)
I think if it used the ioctl ETHTOOL interface it'd be easiest for
us, as we already have code in place for this to check offloading
settings. See [1].
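
For reference, this is roughly the SIOCETHTOOL pattern from util-ioctl.c [1] (a simplified sketch); a new XDP feature query would presumably reuse the same plumbing with a new ETHTOOL_* command and struct.

/* Sketch of the existing ethtool ioctl query used for offload settings;
 * e.g. cmd = ETHTOOL_GFLAGS returns the device flags in eval.data. */
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static int get_ethtool_value(const char *ifname, __u32 cmd, __u32 *value)
{
        struct ethtool_value eval = { .cmd = cmd };
        struct ifreq ifr;
        int fd, ret;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
                return -1;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (void *)&eval;

        ret = ioctl(fd, SIOCETHTOOL, &ifr);
        close(fd);
        if (ret < 0)
                return -1;

        *value = eval.data;
        return 0;
}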


Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.
Do you have an example of how this is queried?


(Q2) Do Suricata devs have any preference (or other options/ideas) for
the way the kernel exposes this info to userspace?
Right now I think extending the ethtool logic is best for us.


[1] https://github.com/OISF/suricata/blob/master/src/util-ioctl.c#L326

--
---------------------------------------------
Victor Julien
http://www.inliniac.net/
PGP: http://www.inliniac.net/victorjulien.asc
---------------------------------------------


Re: [Oisf-devel] Best userspace programming API for XDP features query to kernel?

Michał Purzyński <michalpurzynski1@...>
 

Extending the ethtool mechanism seems like a clean solution here. It is, by design, a 50% reporting tool, and the XDP feature set would be just yet another feature here.

On Apr 4, 2018, at 5:28 AM, Jesper Dangaard Brouer <brouer@...> wrote:

Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.

(Q2) Do Suricata devs have any preference (or other options/ideas) for
the way the kernel exposes this info to userspace?



[1] http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
_______________________________________________
Suricata IDS Devel mailing list: oisf-devel@...
Site: http://suricata-ids.org | Participate: http://suricata-ids.org/participate/
List: https://lists.openinfosecfoundation.org/mailman/listinfo/oisf-devel
Redmine: https://redmine.openinfosecfoundation.org/


minutes: IO Visor TSC/Dev Meeting

Brenden Blanco
 

Hi All,

Thanks for attending the meeting today. Here are my notes from the call.

Cheers,
Brenden

=== Status ===

Yonghong
- Debugging incorrect user stack
- fastpath vs slowpath register difference
- should be updated in 4.9+ (LTS flavors)
- user stack id - doesn't have full access, trying to improve
- bpf perf event introspection

Daniel
cgroup bind and connect patches merged
- 3 new hooks for container management - used by fb
sockmap ingress support added
raw tracepoint
- use all tp arguments directly from bpf
some prep patches for upcoming features
- btf, few others
clang compiled kernel issue with bpffs
- clang constant merging flag turned off

Joe
sock lookup progress
- improvements to verifier for tracking reference scope
- next rfc soon

John
Some patches merged, some just missed this window
- next support for lookup by hash
Adding Cilium support for sockmap soon
Rearranging selftests/examples for sockmap

Jakub
adding support for offloading of a few features
- inline map helpers
- atomic add
- some performance improvements
Next window - perf ring output support
Working on 32 bit register support in llvm

Martin
BTF support V2
- pretty print from kernel
- includes pahole converter
next: LLVM directly generate BTF with -g

=== Attendees ===
Dan Siemon
Brenden Blanco
Alex Reece
Alexander Duyck
Daniel Borkmann
Jakub Kicinski
Joe Stringer
Mauricio Vasquez
Yonghong Song
Martin
Quentin Monnet
Tom
George Wilson
Andy Gospodarek
Jiong Wang
Saeed
Francois
Joel
jcanseco


Best userspace programming API for XDP features query to kernel?

Jesper Dangaard Brouer
 

Hi Suricata people,

When Eric Leblond (and I helped) integrated XDP in Suricata, we ran
into the issue that at Suricata load/start time we cannot determine
whether the chosen XDP config options, like xdp-cpu-redirect[1], are valid on
this HW (e.g. requiring driver XDP_REDIRECT support and bpf cpumap).

We would have liked a way to report that the suricata.yaml config was
invalid for this hardware/setup. Now, it just loads, and packets get
silently dropped by XDP (well, a WARN_ONCE, and catchable via tracepoints).

My question to suricata developers: (Q1) Do you already have code that
queries the kernel or drivers for features?

At the IOvisor call (2 weeks ago), we discussed two options for exposing
the XDP features available in a given driver.

Option#1: Extend the existing ethtool -k/-K "offload and other features"
with some XDP features that userspace can query. (Do you already query
offloads, regarding Q1?)

Option#2: Invent a new 'ip link set xdp' netlink msg with a query option.

(Q2) Do Suricata devs have any preference (or other options/ideas) for
the way the kernel exposes this info to userspace?



[1] http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html#the-xdp-cpu-redirect-case
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


reminder: IO Visor TSC/Dev Meeting

Brenden Blanco
 

Please join us tomorrow for our bi-weekly call. As usual, this meeting is
open to everybody and completely optional.
You might be interested to join if:
You want to know what is going on in BPF land
You are doing something interesting yourself with BPF and would like to share
You want to know what the heck BPF is

=== IO Visor Dev/TSC Meeting ===

Every 2 weeks on Wednesday, from Wednesday, January 25, 2017, to no end date
11:00 am | Pacific Daylight Time (San Francisco, GMT-07:00) | 30 min

https://bluejeans.com/568677804/

https://www.timeanddate.com/worldclock/meetingdetails.html?year=2018&month=4&day=4&hour=18&min=0&sec=0&p1=900


Re: bpf helper functions

riya
 

Oh, it's simply to inspect/filter/drop a compressed application packet in the kernel without incurring network stack processing.

On Mon, Apr 2, 2018 at 11:36 AM, Y Song <ys114321@...> wrote:
Hi, Ashish,

It seems that this has not been actively discussed before. Could you
describe your use case?

Thanks!

Yonghong

On Mon, Apr 2, 2018 at 8:11 AM, riya khanna via iovisor-dev
<iovisor-dev@...> wrote:
> Dear developers,
>
> Is it a good idea to export kernel compression/decompression (lib/zlib_*,
> lib/lz*) as helper functions to enable corresponding use cases? Has this
> been tried before?
>
> Thanks,
> Ashish
>
> _______________________________________________
> iovisor-dev mailing list
> iovisor-dev@...
> https://lists.iovisor.org/mailman/listinfo/iovisor-dev
>


Re: bpf helper functions

Yonghong Song
 

Hi, Ashish,

It seems that this has not been actively discussed before. Could you
describe your use case?

Thanks!

Yonghong

On Mon, Apr 2, 2018 at 8:11 AM, riya khanna via iovisor-dev
<iovisor-dev@...> wrote:
Dear developers,

Is it a good idea to export kernel compression/decompression (lib/zlib_*,
lib/lz*) as helper functions to enable corresponding use cases? Has this
been tried before?

Thanks,
Ashish

_______________________________________________
iovisor-dev mailing list
iovisor-dev@...
https://lists.iovisor.org/mailman/listinfo/iovisor-dev


bpf helper functions

riya
 

Dear developers,

Is it a good idea to export kernel compression/decompression (lib/zlib_*, lib/lz*) as helper functions to enable corresponding use cases? Has this been tried before?

Thanks,
Ashish


Re: trace_printk

Quentin Monnet
 

Untrue. All these options can be used (and are used by big companies)
with production kernels. I believe many distributions enable them by
default.

Quentin


2018-03-29 14:34 UTC+0000 ~ Saran Kumar Krishnan <sarankumar.k@...>

Thanks for the clarification.


I assumed enabling the options specified in https://github.com/iovisor/bcc/blob/master/INSTALL.md

rendered the kernel as DEBUG which is unsafe for production use.


Is my assumption untrue?


Regards

Saran

________________________________
From: Quentin Monnet <quentin.monnet@...>
Sent: Thursday, March 29, 2018 9:49 AM
To: Saran Kumar Krishnan
Cc: Y Song; iovisor-dev@...
Subject: Re: [iovisor-dev] trace_printk

Just to make it clear, Yonghong means that *the helper function
bpf_trace_printk()* should be used mostly for debug. eBPF itself can
perfectly be used in production.

If you needed to stream data from eBPF programs to user space in
production applications, the way to go would be to use perf maps and
related helpers. This would provide better performance than
bpf_trace_printk().

Best regards,
Quentin


2018-03-28 13:14 UTC-0700 ~ Y Song via iovisor-dev
<iovisor-dev@...>
No, this can still be used. The warning just tells you this should
mostly be used in debug mode.

On Wed, Mar 28, 2018 at 11:59 AM, Saran Kumar Krishnan via iovisor-dev
<iovisor-dev@...> wrote:
Hi -


When I use bpf_trace_printk, I am getting this NOTICE. Does it mean that
eBPF shouldn't be used in the production kernel?



Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **********************************************************
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467519] ** trace_printk() being used. Allocating extra memory.  **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467519] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] ** This means that this is a DEBUG kernel and it is     **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] ** unsafe for production use.                           **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467521] ** If you see this message and you are not debugging    **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] ** the kernel, report this immediately to your vendor!  **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467523] **********************************************************



Regards
Saran


Re: trace_printk

SARANKUMAR KRISHNAN
 

Thanks for the clarification. 


I assumed enabling the options specified in https://github.com/iovisor/bcc/blob/master/INSTALL.md  

rendered the kernel as DEBUG which is unsafe for production use. 


Is my assumption untrue?


Regards

Saran 


From: Quentin Monnet <quentin.monnet@...>
Sent: Thursday, March 29, 2018 9:49 AM
To: Saran Kumar Krishnan
Cc: Y Song; iovisor-dev@...
Subject: Re: [iovisor-dev] trace_printk
 
Just to make it clear, Yonghong means that *the helper function
bpf_trace_printk()* should be used mostly for debug. eBPF itself can
perfectly be used in production.

If you needed to stream data from eBPF programs to user space in
production applications, the way to go would be to use perf maps and
related helpers. This would provide better performance than
bpf_trace_printk().

Best regards,
Quentin


2018-03-28 13:14 UTC-0700 ~ Y Song via iovisor-dev
<iovisor-dev@...>
> No, this can still be used. The warning just tells you this should
> mostly be used in debug mode.
>
> On Wed, Mar 28, 2018 at 11:59 AM, Saran Kumar Krishnan via iovisor-dev
> <iovisor-dev@...> wrote:
>> Hi -
>>
>>
>> When I use bpf_trace_printk, I am getting this NOTICE. Does it mean that
>> eBPF shouldn't be used in the production kernel?
>>
>>
>>
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467518] **********************************************************
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467518] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467518] **                                                      **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467519] ** trace_printk() being used. Allocating extra memory.  **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467519] **                                                      **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467520] ** This means that this is a DEBUG kernel and it is     **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467520] ** unsafe for production use.                           **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467520] **                                                      **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467521] ** If you see this message and you are not debugging    **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467522] ** the kernel, report this immediately to your vendor!  **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467522] **                                                      **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467522] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
>> Mar 20 12:26:44 lenovo-e565 kernel: [  267.467523] **********************************************************
>>
>>
>>
>> Regards
>> Saran


Re: trace_printk

Quentin Monnet
 

Just to make it clear, Yonghong means that *the helper function
bpf_trace_printk()* should be used mostly for debug. eBPF itself can
perfectly be used in production.

If you needed to stream data from eBPF programs to user space in
production applications, the way to go would be to use perf maps and
related helpers. This would provide better performance than
bpf_trace_printk().
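
A minimal sketch of that perf-map approach on the BPF side (map name, attach point and event layout are just example choices; userspace then reads the events via perf, e.g. bcc's open_perf_buffer()):

/* Sketch: stream events to user space through a perf event array instead of
 * bpf_trace_printk(). */
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include "bpf_helpers.h"        /* SEC(), bpf_map_def, helper declarations
                                 * (as shipped in kernel samples/selftests) */

struct event {
        __u32 pid;
        __u64 ts;
};

struct bpf_map_def SEC("maps") events = {
        .type        = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
        .key_size    = sizeof(int),
        .value_size  = sizeof(__u32),
        .max_entries = 128,     /* should cover the number of possible CPUs */
};

SEC("kprobe/sys_execve")
int trace_execve(struct pt_regs *ctx)
{
        struct event e = {
                .pid = bpf_get_current_pid_tgid() >> 32,
                .ts  = bpf_ktime_get_ns(),
        };

        /* user space consumes this per-CPU stream via perf */
        bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &e, sizeof(e));
        return 0;
}

char _license[] SEC("license") = "GPL";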

Best regards,
Quentin


2018-03-28 13:14 UTC-0700 ~ Y Song via iovisor-dev
<iovisor-dev@...>

No, this can still be used. The warning just tells you this should
mostly be used in debug mode.

On Wed, Mar 28, 2018 at 11:59 AM, Saran Kumar Krishnan via iovisor-dev
<iovisor-dev@...> wrote:
Hi -


When I use bpf_trace_printk, I am getting this NOTICE. Does it mean that
eBPF shouldn't be used in the production kernel?



Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **********************************************************
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467519] ** trace_printk() being used. Allocating extra memory.  **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467519] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] ** This means that this is a DEBUG kernel and it is     **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] ** unsafe for production use.                           **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467521] ** If you see this message and you are not debugging    **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] ** the kernel, report this immediately to your vendor!  **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467523] **********************************************************



Regards
Saran


Re: trace_printk

Yonghong Song
 

No, this can still be used. The warning just tells you this should
mostly be used in debug mode.

On Wed, Mar 28, 2018 at 11:59 AM, Saran Kumar Krishnan via iovisor-dev
<iovisor-dev@...> wrote:
Hi -


When I use bpf_trace_printk, I am getting this NOTICE. Does it mean that
eBPF shouldn't be used in the production kernel?



Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **********************************************************
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467518] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467519] ** trace_printk() being used. Allocating extra memory.  **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467519] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] ** This means that this is a DEBUG kernel and it is     **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] ** unsafe for production use.                           **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467520] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467521] ** If you see this message and you are not debugging    **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] ** the kernel, report this immediately to your vendor!  **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] **                                                      **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467522] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
Mar 20 12:26:44 lenovo-e565 kernel: [ 267.467523] **********************************************************



Regards
Saran



_______________________________________________
iovisor-dev mailing list
iovisor-dev@...
https://lists.iovisor.org/mailman/listinfo/iovisor-dev
