
Re: Plans for libbpf packaging for distros?

Alexei Starovoitov
 

On Thu, Nov 08, 2018 at 11:51:22AM -0800, Laura Abbott wrote:
On 11/8/18 7:48 AM, Jakub Kicinski wrote:
On Thu, 8 Nov 2018 14:37:54 +0100, Jesper Dangaard Brouer wrote:
Hi Jakub, Laura and Jiri Olsa (Cc others),

Subj: [iovisor-dev] minutes: IO Visor TSC/Dev Meeting
(To: iovisor-dev <iovisor-dev@...>)
On Wed, 31 Oct 2018 14:30:25 -0700 "Brenden Blanco" <bblanco@...> wrote:
Jakub:
* working on getting libbpf packaged separately and released by distros
* FB has external mirror github.com/libbpf/libbpf
I noticed from the iovisor-dev minutes that you have plans for
packaging libbpf from the kernel tree. And via that I noticed the
github repo https://github.com/libbpf/libbpf, created by Yonghong Song.

I'm uncertain whether it makes sense to maintain this library outside
the kernel git tree.
To my understanding it's useful in two ways:
- some less fortunate distros (Debian) reportedly need a kernel build
to package bpftool, and for libbpf the same would have to happen.
At least that's what I'm told. So a separate repo helps there a lot;
- we actually use the separate git repo as a submodule in our projects
(https://github.com/Netronome/bpf-samples will migrate there really
soon, just finishing code review).

So for us the git submodule thing works quite well until distros
package libbpf :)
To be honest, I have very little knowledge about building RPMs and
other package formats. I just wanted to point out that RHEL and
Fedora are now shipping bpftool, which is also part of the kernel git tree.

(Now I need input from Jiri Olsa and Laura to correct the statements below:)

AFAIK the bpftool RPM package[1] is part of the "Source Package"
kernel-tools, which AFAIK gets built directly from the distro kernel
git tree via the kernel.spec file. This also happens for the perf
RPM package[2]; its "Source Package" section also points to kernel-tools.

So, my question is, can we ship/package libbpf in the same way?


Notice that an increasing number of tools are linking against/using libbpf,
e.g. perf, bpftool, Suricata (plus selftests and samples/bpf).


[1] https://fedora.pkgs.org/28/fedora-x86_64/bpftool-4.16.0-1.fc28.x86_64.rpm.html
[2] https://fedora.pkgs.org/29/fedora-x86_64/perf-4.18.10-300.fc29.x86_64.rpm.html
We were planning to do the same thing for libbpf. Let me copy-paste the
patch to the package:

Add libbpf to kernel tools development libs. This library contains
functionality for loading and managing eBPF programs.

Signed-off-by: David Beckett <david.beckett@...>
---
kernel-tools.spec | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/kernel-tools.spec b/kernel-tools.spec
index 44d29ba..cf2f7a0 100644
--- a/kernel-tools.spec
+++ b/kernel-tools.spec
@@ -261,6 +261,9 @@ popd
pushd tools/gpio/
make
popd
+pushd tools/lib/bpf
+make
+popd
pushd tools/bpf/bpftool
make
popd
@@ -347,6 +350,9 @@ popd
pushd tools/kvm/kvm_stat
make INSTALL_ROOT=%{buildroot} install-tools
popd
+pushd tools/lib/bpf
+make DESTDIR=%{buildroot} prefix=%{_prefix} install install_headers
+popd
pushd tools/bpf/bpftool
make DESTDIR=%{buildroot} prefix=%{_prefix} bash_compdir=%{_sysconfdir}/bash_completion.d/ mandir=%{_mandir} install doc-install
popd
@@ -420,8 +426,13 @@ popd
%files -n kernel-tools-libs-devel
%{_libdir}/libcpupower.so
+%{_libdir}/libbpf.a
+%{_libdir}/libbpf.so
%{_includedir}/cpufreq.h
%{_includedir}/cpuidle.h
+%{_includedir}/bpf/bpf.h
+%{_includedir}/bpf/btf.h
+%{_includedir}/bpf/libbpf.h
%files -n bpftool
%{_sbindir}/bpftool

Fairly trivial patch, but since we learnt about the separate repo we
are migrating our internal projects and tests to it. Alexei then
suggested we need to add proper versioning to libbpf, and when all that
is done we can come back to packaging.
Yes, this looks almost exactly like what I would expect to package
libbpf. Fedora split out kernel-tools from the main kernel package
to reduce build dependencies (if something in kernel-tools broke,
it blocked the build of the main kernel). I'm in favor of
continuing to move userspace tools out of the main kernel tree
for ease of packaging.
I think we're all on the same page.
Just to reiterate a few points:
The source of truth for libbpf is the kernel tree.
github/libbpf is a mirror of the kernel sources plus a few headers that
are necessary for the build.
Currently we don't have the mirror automated,
but the plan is to pull from the kernel tree continuously.
.spec files can live in github too,
and packages can be built out of it.

When we were talking to distro folks they suggested making sure
that libbpf.so has versioned symbols from day one.
As far as I know that's the only thing blocking the creation of an
official libbpf package.
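[Editor's note: versioned symbols are typically provided via a GNU ld version script passed to the linker. A minimal sketch of what that could look like; the version node name and the exported symbols here are illustrative assumptions, not the eventual libbpf ABI:]

```
/* libbpf.map, hypothetical version script */
LIBBPF_0.0.1 {
        global:
                bpf_object__open;
                bpf_object__load;
                bpf_prog_load;
        local:
                *;
};
```

The shared library would then be linked with something like `-shared -Wl,--version-script=libbpf.map`, after which `readelf --dyn-syms libbpf.so` shows each exported symbol tagged with the version node, e.g. `bpf_object__open@@LIBBPF_0.0.1`.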


Re: Plans for libbpf packaging for distros?

Jakub Kicinski
 

On Thu, 8 Nov 2018 14:37:54 +0100, Jesper Dangaard Brouer wrote:
Hi Jakub, Laura and Jiri Olsa (Cc others),

Subj: [iovisor-dev] minutes: IO Visor TSC/Dev Meeting
(To: iovisor-dev <iovisor-dev@...>)
On Wed, 31 Oct 2018 14:30:25 -0700 "Brenden Blanco" <bblanco@...> wrote:
Jakub:
* working on getting libbpf packaged separately and released by distros
* FB has external mirror github.com/libbpf/libbpf
I noticed from the iovisor-dev minutes that you have plans for
packaging libbpf from the kernel tree. And via that I noticed the
github repo https://github.com/libbpf/libbpf, created by Yonghong Song.

I'm uncertain whether it makes sense to maintain this library outside
the kernel git tree.
To my understanding it's useful in two ways:
- some less fortunate distros (Debian) reportedly need a kernel build
to package bpftool, and for libbpf the same would have to happen.
At least that's what I'm told. So a separate repo helps there a lot;
- we actually use the separate git repo as a submodule in our projects
(https://github.com/Netronome/bpf-samples will migrate there really
soon, just finishing code review).

So for us the git submodule thing works quite well until distros
package libbpf :)

To be honest, I have very little knowledge about building RPMs and
other package formats. I just wanted to point out that RHEL and
Fedora are now shipping bpftool, which is also part of the kernel git tree.

(Now I need input from Jiri Olsa and Laura to correct the statements below:)

AFAIK the bpftool RPM package[1] is part of the "Source Package"
kernel-tools, which AFAIK gets built directly from the distro kernel
git tree via the kernel.spec file. This also happens for the perf
RPM package[2]; its "Source Package" section also points to kernel-tools.

So, my question is, can we ship/package libbpf in the same way?


Notice that an increasing number of tools are linking against/using libbpf,
e.g. perf, bpftool, Suricata (plus selftests and samples/bpf).


[1] https://fedora.pkgs.org/28/fedora-x86_64/bpftool-4.16.0-1.fc28.x86_64.rpm.html
[2] https://fedora.pkgs.org/29/fedora-x86_64/perf-4.18.10-300.fc29.x86_64.rpm.html
We were planning to do the same thing for libbpf. Let me copy-paste the
patch to the package:

Add libbpf to kernel tools development libs. This library contains
functionality for loading and managing eBPF programs.

Signed-off-by: David Beckett <david.beckett@...>
---
kernel-tools.spec | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/kernel-tools.spec b/kernel-tools.spec
index 44d29ba..cf2f7a0 100644
--- a/kernel-tools.spec
+++ b/kernel-tools.spec
@@ -261,6 +261,9 @@ popd
pushd tools/gpio/
make
popd
+pushd tools/lib/bpf
+make
+popd
pushd tools/bpf/bpftool
make
popd
@@ -347,6 +350,9 @@ popd
pushd tools/kvm/kvm_stat
make INSTALL_ROOT=%{buildroot} install-tools
popd
+pushd tools/lib/bpf
+make DESTDIR=%{buildroot} prefix=%{_prefix} install install_headers
+popd
pushd tools/bpf/bpftool
make DESTDIR=%{buildroot} prefix=%{_prefix} bash_compdir=%{_sysconfdir}/bash_completion.d/ mandir=%{_mandir} install doc-install
popd
@@ -420,8 +426,13 @@ popd

%files -n kernel-tools-libs-devel
%{_libdir}/libcpupower.so
+%{_libdir}/libbpf.a
+%{_libdir}/libbpf.so
%{_includedir}/cpufreq.h
%{_includedir}/cpuidle.h
+%{_includedir}/bpf/bpf.h
+%{_includedir}/bpf/btf.h
+%{_includedir}/bpf/libbpf.h

%files -n bpftool
%{_sbindir}/bpftool

Fairly trivial patch, but since we learnt about the separate repo we
are migrating our internal projects and tests to it. Alexei then
suggested we need to add proper versioning to libbpf, and when all that
is done we can come back to packaging.


Re: Plans for libbpf packaging for distros?

Jesper Dangaard Brouer
 

On Thu, 8 Nov 2018 12:37:06 -0200
Arnaldo Carvalho de Melo <acme@...> wrote:

On Thu, Nov 08, 2018 at 02:37:54PM +0100, Jesper Dangaard Brouer wrote:
Hi Jakub, Laura and Jiri Olsa (Cc others),

Subj: [iovisor-dev] minutes: IO Visor TSC/Dev Meeting
(To: iovisor-dev <iovisor-dev@...>)
On Wed, 31 Oct 2018 14:30:25 -0700 "Brenden Blanco" <bblanco@...> wrote:
Jakub:
* working on getting libbpf packaged separately and released by distros
* FB has external mirror github.com/libbpf/libbpf
I noticed from the iovisor-dev minutes that you have plans for
packaging libbpf from the kernel tree. And via that I noticed the
github repo https://github.com/libbpf/libbpf, created by Yonghong Song.

I'm uncertain whether it makes sense to maintain this library outside
the kernel git tree.
To ease access to the latest perf sources, we'll be making available
detached tarballs:

[acme@jouet linux]$ make help | grep perf
perf-tar-src-pkg - Build perf-4.20.0-rc1.tar source tarball
perf-targz-src-pkg - Build perf-4.20.0-rc1.tar.gz source tarball
perf-tarbz2-src-pkg - Build perf-4.20.0-rc1.tar.bz2 source tarball
perf-tarxz-src-pkg - Build perf-4.20.0-rc1.tar.xz source tarball
[acme@jouet linux]$

These will be published after each kernel release; we started with 4.19:

https://www.kernel.org/pub/linux/kernel/tools/perf/
And you basically also ship a version of libbpf in this tarball:

$ tar tvf ~/download/perf-4.19.0.tar.xz | grep lib/bpf
drwxrwxr-x root/root 0 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/
-rw-rw-r-- root/root 37 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/.gitignore
-rw-rw-r-- root/root 69 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/Build
-rw-rw-r-- root/root 6457 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/Makefile
-rw-rw-r-- root/root 16456 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/bpf.c
-rw-rw-r-- root/root 4440 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/bpf.h
-rw-rw-r-- root/root 7897 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/btf.c
-rw-rw-r-- root/root 775 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/btf.h
-rw-rw-r-- root/root 56905 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/libbpf.c
-rw-rw-r-- root/root 10683 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/libbpf.h
-rw-rw-r-- root/root 2380 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/libbpf_errno.c
-rw-rw-r-- root/root 4483 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/nlattr.c
-rw-rw-r-- root/root 1825 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/nlattr.h
-rw-rw-r-- root/root 479 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/str_error.c
-rw-rw-r-- root/root 152 2018-10-22 16:39 perf-4.19.0/tools/lib/bpf/str_error.h

Which gets compiled to libbpf.a and statically linked with perf.

Development continues in the kernel git tree, of course, and there
you'll be able to use those top-level kernel perf-tar-* targets to
get the bleeding edge, while the tarballs on
https://www.kernel.org/pub/linux/kernel/tools/perf/
help people wanting to try the latest release with older kernels, or to
test a previous release with a more recent kernel, to rule out
problems with some specific perf version.

Konstantin, the kernel.org admin, accepted my suggestion for such a
directory name so that, in the future, the other libraries and tools
living in tools/ could perhaps follow this model, i.e. we would
have:

https://www.kernel.org/pub/linux/kernel/tools/lib/bpf/

etc.

- Arnaldo

To be honest, I have very little knowledge about building RPMs and
other package formats. I just wanted to point out that RHEL and
Fedora are now shipping bpftool, which is also part of the kernel git tree.

(Now I need input from Jiri Olsa and Laura to correct the statements below:)

AFAIK the bpftool RPM package[1] is part of the "Source Package"
kernel-tools, which AFAIK gets built directly from the distro kernel
git tree via the kernel.spec file. This also happens for the perf
RPM package[2]; its "Source Package" section also points to kernel-tools.

So, my question is, can we ship/package libbpf in the same way?


Notice that an increasing number of tools are linking against/using libbpf,
e.g. perf, bpftool, Suricata (plus selftests and samples/bpf).


[1] https://fedora.pkgs.org/28/fedora-x86_64/bpftool-4.16.0-1.fc28.x86_64.rpm.html
[2] https://fedora.pkgs.org/29/fedora-x86_64/perf-4.18.10-300.fc29.x86_64.rpm.html
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


[RFC/PATCH 2/2] tools: Add sofdsnoop to spy on fds passed through socket

Jiri Olsa
 

The sofdsnoop tool traces FDs passed through unix sockets.

# ./sofdsnoop
ACTION TID COMM SOCKET FD NAME
SEND 2576 Web Content 24:socket:[39763] 51 /dev/shm/org.mozilla.ipc.2576.23874
RECV 2576 Web Content 49:socket:[809997] 51
SEND 2576 Web Content 24:socket:[39763] 58 N/A
RECV 2464 Gecko_IOThread 75:socket:[39753] 55

Every file descriptor that is passed via unix sockets is displayed
on a separate line together with process info (TID/COMM columns),
ACTION details (SEND/RECV), the file descriptor number (FD) and its
translation to a file name if available (NAME).

examples:
./sofdsnoop # trace file descriptor passing
./sofdsnoop -T # include timestamps
./sofdsnoop -p 181 # only trace PID 181
./sofdsnoop -t 123 # only trace TID 123
./sofdsnoop -d 10 # trace for 10 seconds only
./sofdsnoop -n main # only print process names containing "main"
---
README.md | 1 +
man/man8/sofdsnoop.8 | 85 ++++++++
snapcraft/snapcraft.yaml | 3 +
tests/python/test_tools_smoke.py | 4 +
tools/sofdsnoop.py | 355 +++++++++++++++++++++++++++++++
tools/sofdsnoop_example.txt | 69 ++++++
6 files changed, 517 insertions(+)
create mode 100644 man/man8/sofdsnoop.8
create mode 100755 tools/sofdsnoop.py
create mode 100644 tools/sofdsnoop_example.txt

diff --git a/README.md b/README.md
index 92b90e5d7c4c..54733ebb58d1 100644
--- a/README.md
+++ b/README.md
@@ -133,6 +133,7 @@ pair of .c and .py files, and some are directories of files.
- tools/[runqlen](tools/runqlen.py): Run queue length as a histogram. [Examples](tools/runqlen_example.txt).
- tools/[runqslower](tools/runqslower.py): Trace long process scheduling delays. [Examples](tools/runqslower_example.txt).
- tools/[shmsnoop](tools/shmsnoop.py): Trace System V shared memory syscalls. [Examples](tools/shmsnoop_example.txt).
+- tools/[sofdsnoop](tools/sofdsnoop.py): Trace FDs passed through unix sockets. [Examples](tools/sofdsnoop_example.txt).
- tools/[slabratetop](tools/slabratetop.py): Kernel SLAB/SLUB memory cache allocation rate top. [Examples](tools/slabratetop_example.txt).
- tools/[softirqs](tools/softirqs.py): Measure soft IRQ (soft interrupt) event time. [Examples](tools/softirqs_example.txt).
- tools/[solisten](tools/solisten.py): Trace TCP socket listen. [Examples](tools/solisten_example.txt).
diff --git a/man/man8/sofdsnoop.8 b/man/man8/sofdsnoop.8
new file mode 100644
index 000000000000..ffad57c591c1
--- /dev/null
+++ b/man/man8/sofdsnoop.8
@@ -0,0 +1,85 @@
+.TH sofdsnoop 8 "2018-11-08" "USER COMMANDS"
+.SH NAME
+sofdsnoop \- Trace FDs passed through unix sockets. Uses Linux eBPF/bcc.
+.SH SYNOPSIS
+.B sofdsnoop [-h] [-T] [-p PID] [-t TID] [-n NAME] [-d DURATION]
+.SH DESCRIPTION
+sofdsnoop traces FDs passed through unix sockets
+
+Every file descriptor that is passed via unix sockets is displayed
+on a separate line together with process info (TID/COMM columns),
+ACTION details (SEND/RECV), the file descriptor number (FD) and its
+translation to a file name if available (NAME).
+
+Since this uses BPF, only the root user can use this tool.
+.SH REQUIREMENTS
+CONFIG_BPF and bcc.
+.SH OPTIONS
+.TP
+\-h
+Print usage message.
+.TP
+\-T
+Include a timestamp column.
+.TP
+\-p PID
+Trace this process ID only (filtered in-kernel).
+.TP
+\-t TID
+Trace this thread ID only (filtered in-kernel).
+.TP
+\-d DURATION
+Total duration of trace in seconds.
+.TP
+\-n NAME
+Only print command lines matching this command name (regex)
+.SH EXAMPLES
+.TP
+Trace all sockets:
+#
+.B sofdsnoop
+.TP
+Trace all sockets, and include timestamps:
+#
+.B sofdsnoop \-T
+.TP
+Only trace sockets where the process contains "server":
+#
+.B sofdsnoop \-n server
+.SH FIELDS
+.TP
+TIME(s)
+Time of SEND/RECV actions, in seconds.
+.TP
+ACTION
+Operation on the fd: SEND or RECV.
+.TP
+TID
+Process TID
+.TP
+COMM
+Parent process/command name.
+.TP
+SOCKET
+The socket carrier.
+.TP
+FD
+File descriptor number.
+.TP
+NAME
+File name, for SEND lines.
+.SH SOURCE
+This is from bcc.
+.IP
+https://github.com/iovisor/bcc
+.PP
+Also look in the bcc distribution for a companion _examples.txt file containing
+example usage, output, and commentary for this tool.
+.SH OS
+Linux
+.SH STABILITY
+Unstable - in development.
+.SH AUTHOR
+Jiri Olsa
+.SH SEE ALSO
+opensnoop(1)
diff --git a/snapcraft/snapcraft.yaml b/snapcraft/snapcraft.yaml
index fcff9624d72e..e408686f2553 100644
--- a/snapcraft/snapcraft.yaml
+++ b/snapcraft/snapcraft.yaml
@@ -192,6 +192,9 @@ apps:
shmsnoop:
command: wrapper shmsnoop
aliases: [shmsnoop]
+ sofdsnoop:
+ command: wrapper sofdsnoop
+ aliases: [sofdsnoop]
phpcalls:
command: wrapper phpcalls
aliases: [phpcalls]
diff --git a/tests/python/test_tools_smoke.py b/tests/python/test_tools_smoke.py
index 3aad12eaa3af..f6fca6376740 100755
--- a/tests/python/test_tools_smoke.py
+++ b/tests/python/test_tools_smoke.py
@@ -266,6 +266,10 @@ class SmokeTests(TestCase):
def test_shmsnoop(self):
self.run_with_int("shmsnoop.py")

+ @skipUnless(kernel_version_ge(4,8), "requires kernel >= 4.8")
+ def test_sofdsnoop(self):
+ self.run_with_int("sofdsnoop.py")
+
def test_slabratetop(self):
self.run_with_duration("slabratetop.py 1 1")

diff --git a/tools/sofdsnoop.py b/tools/sofdsnoop.py
new file mode 100755
index 000000000000..ae5ab9f59f94
--- /dev/null
+++ b/tools/sofdsnoop.py
@@ -0,0 +1,355 @@
+#!/usr/bin/python
+# @lint-avoid-python-3-compatibility-imports
+#
+# sofdsnoop traces file descriptors passed via socket
+# For Linux, uses BCC, eBPF. Embedded C.
+#
+# USAGE: sofdsnoop
+#
+# Copyright (c) 2018 Jiri Olsa.
+# Licensed under the Apache License, Version 2.0 (the "License")
+#
+# 30-Jul-2018 Jiri Olsa Created this.
+
+from __future__ import print_function
+from bcc import ArgString, BPF
+import os
+import argparse
+import ctypes as ct
+from datetime import datetime, timedelta
+
+# arguments
+examples = """examples:
+ ./sofdsnoop # trace file descriptor passing
+ ./sofdsnoop -T # include timestamps
+ ./sofdsnoop -p 181 # only trace PID 181
+ ./sofdsnoop -t 123 # only trace TID 123
+ ./sofdsnoop -d 10 # trace for 10 seconds only
+ ./sofdsnoop -n main # only print process names containing "main"
+
+"""
+parser = argparse.ArgumentParser(
+ description="Trace file descriptors passed via socket",
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ epilog=examples)
+parser.add_argument("-T", "--timestamp", action="store_true",
+ help="include timestamp on output")
+parser.add_argument("-p", "--pid",
+ help="trace this PID only")
+parser.add_argument("-t", "--tid",
+ help="trace this TID only")
+parser.add_argument("-n", "--name",
+ type=ArgString,
+ help="only print process names containing this name")
+parser.add_argument("-d", "--duration",
+ help="total duration of trace in seconds")
+args = parser.parse_args()
+debug = 0
+
+ACTION_SEND=0
+ACTION_RECV=1
+MAX_FD=10
+
+if args.duration:
+ args.duration = timedelta(seconds=int(args.duration))
+
+# define BPF program
+bpf_text = """
+#include <uapi/linux/ptrace.h>
+#include <uapi/linux/limits.h>
+#include <linux/sched.h>
+#include <linux/socket.h>
+#include <net/sock.h>
+
+#define MAX_FD 10
+#define ACTION_SEND 0
+#define ACTION_RECV 1
+
+struct val_t {
+ u64 id;
+ u64 ts;
+ int action;
+ int sock_fd;
+ int fd_cnt;
+ int fd[MAX_FD];
+ char comm[TASK_COMM_LEN];
+};
+
+BPF_HASH(detach_ptr, u64, struct cmsghdr *);
+BPF_HASH(sock_fd, u64, int);
+BPF_PERF_OUTPUT(events);
+
+static void set_fd(int fd)
+{
+ u64 id = bpf_get_current_pid_tgid();
+
+ sock_fd.update(&id, &fd);
+}
+
+static int get_fd(void)
+{
+ u64 id = bpf_get_current_pid_tgid();
+ int *fd;
+
+ fd = sock_fd.lookup(&id);
+ return fd ? *fd : -1;
+}
+
+static void put_fd(void)
+{
+ u64 id = bpf_get_current_pid_tgid();
+
+ sock_fd.delete(&id);
+}
+
+static int sent_1(struct pt_regs *ctx, struct val_t *val, int num, void *data)
+{
+ val->fd_cnt = min(num, MAX_FD);
+
+ if (bpf_probe_read(&val->fd[0], MAX_FD * sizeof(int), data))
+ return -1;
+
+ events.perf_submit(ctx, val, sizeof(*val));
+ return 0;
+}
+
+#define SEND_1 \
+ if (sent_1(ctx, &val, num, (void *) data)) \
+ return 0; \
+ \
+ num -= MAX_FD; \
+ if (num < 0) \
+ return 0; \
+ \
+ data += MAX_FD;
+
+#define SEND_2 SEND_1 SEND_1
+#define SEND_4 SEND_2 SEND_2
+#define SEND_8 SEND_4 SEND_4
+#define SEND_260 SEND_8 SEND_8 SEND_8 SEND_2
+
+static int send(struct pt_regs *ctx, struct cmsghdr *cmsg, int action)
+{
+ struct val_t val = { 0 };
+ int *data, num, fd;
+ u64 tsp = bpf_ktime_get_ns();
+
+ data = (void *) ((char *) cmsg + sizeof(struct cmsghdr));
+ num = (cmsg->cmsg_len - sizeof(struct cmsghdr)) / sizeof(int);
+
+ val.id = bpf_get_current_pid_tgid();
+ val.action = action;
+ val.sock_fd = get_fd();
+ val.ts = tsp / 1000;
+
+ if (bpf_get_current_comm(&val.comm, sizeof(val.comm)) != 0)
+ return 0;
+
+ SEND_260
+ return 0;
+}
+
+static bool allow_pid(u64 id)
+{
+ u32 pid = id >> 32; // PID is higher part
+ u32 tid = id; // Cast and get the lower part
+
+ FILTER
+
+ return 1;
+}
+
+int trace_scm_send_entry(struct pt_regs *ctx, struct socket *sock, struct msghdr *hdr)
+{
+ struct cmsghdr *cmsg = NULL;
+
+ if (!allow_pid(bpf_get_current_pid_tgid()))
+ return 0;
+
+ if (hdr->msg_controllen >= sizeof(struct cmsghdr))
+ cmsg = hdr->msg_control;
+
+ if (!cmsg || (cmsg->cmsg_type != SCM_RIGHTS))
+ return 0;
+
+ return send(ctx, cmsg, ACTION_SEND);
+};
+
+int trace_scm_detach_fds_entry(struct pt_regs *ctx, struct msghdr *hdr)
+{
+ struct cmsghdr *cmsg = NULL;
+ u64 id = bpf_get_current_pid_tgid();
+
+ if (!allow_pid(id))
+ return 0;
+
+ if (hdr->msg_controllen >= sizeof(struct cmsghdr))
+ cmsg = hdr->msg_control;
+
+ if (!cmsg)
+ return 0;
+
+ detach_ptr.update(&id, &cmsg);
+ return 0;
+};
+
+int trace_scm_detach_fds_return(struct pt_regs *ctx)
+{
+ struct cmsghdr **cmsgp;
+ u64 id = bpf_get_current_pid_tgid();
+
+ if (!allow_pid(id))
+ return 0;
+
+ cmsgp = detach_ptr.lookup(&id);
+
+ if (!cmsgp)
+ return 0;
+
+ return send(ctx, *cmsgp, ACTION_RECV);
+}
+
+int trace_sendmsg_entry(struct pt_regs *ctx, struct pt_regs *_p)
+{
+ struct pt_regs p;
+ int fd;
+
+ if (!allow_pid(bpf_get_current_pid_tgid()))
+ return 0;
+
+ if (bpf_probe_read(&p, sizeof(p), (void *) _p))
+ return 0;
+
+ fd = (int) PT_REGS_PARM1(&p);
+
+ set_fd(fd);
+ return 0;
+}
+
+int trace_sendmsg_return(struct pt_regs *ctx)
+{
+ if (!allow_pid(bpf_get_current_pid_tgid()))
+ return 0;
+
+ put_fd();
+ return 0;
+}
+
+int trace_recvmsg_entry(struct pt_regs *ctx, struct pt_regs *_p)
+{
+ struct pt_regs p;
+ int fd;
+
+ if (!allow_pid(bpf_get_current_pid_tgid()))
+ return 0;
+
+ if (bpf_probe_read(&p, sizeof(p), (void *) _p))
+ return 0;
+
+ fd = (int) PT_REGS_PARM1(&p);
+
+ set_fd(fd);
+ return 0;
+}
+
+int trace_recvmsg_return(struct pt_regs *ctx)
+{
+ if (!allow_pid(bpf_get_current_pid_tgid()))
+ return 0;
+
+ put_fd();
+ return 0;
+}
+
+"""
+
+if args.tid: # TID trumps PID
+ bpf_text = bpf_text.replace('FILTER',
+ 'if (tid != %s) { return 0; }' % args.tid)
+elif args.pid:
+ bpf_text = bpf_text.replace('FILTER',
+ 'if (pid != %s) { return 0; }' % args.pid)
+else:
+ bpf_text = bpf_text.replace('FILTER', '')
+
+# initialize BPF
+b = BPF(text=bpf_text)
+
+syscall_fnname = b.get_syscall_fnname("sendmsg")
+if BPF.ksymname(syscall_fnname) != -1:
+ b.attach_kprobe(event=syscall_fnname, fn_name="trace_sendmsg_entry")
+ b.attach_kretprobe(event=syscall_fnname, fn_name="trace_sendmsg_return")
+
+syscall_fnname = b.get_syscall_fnname("recvmsg")
+if BPF.ksymname(syscall_fnname) != -1:
+ b.attach_kprobe(event=syscall_fnname, fn_name="trace_recvmsg_entry")
+ b.attach_kretprobe(event=syscall_fnname, fn_name="trace_recvmsg_return")
+
+b.attach_kprobe(event="__scm_send", fn_name="trace_scm_send_entry")
+b.attach_kprobe(event="scm_detach_fds", fn_name="trace_scm_detach_fds_entry")
+b.attach_kretprobe(event="scm_detach_fds", fn_name="trace_scm_detach_fds_return")
+
+TASK_COMM_LEN = 16 # linux/sched.h
+
+initial_ts = 0
+
+class Data(ct.Structure):
+ _fields_ = [
+ ("id", ct.c_ulonglong),
+ ("ts", ct.c_ulonglong),
+ ("action", ct.c_int),
+ ("sock_fd", ct.c_int),
+ ("fd_cnt", ct.c_int),
+ ("fd", ct.c_int * MAX_FD),
+ ("comm", ct.c_char * TASK_COMM_LEN),
+ ]
+
+# header
+if args.timestamp:
+ print("%-14s" % ("TIME(s)"), end="")
+print("%-6s %-6s %-16s %-25s %-5s %s" %
+ ("ACTION", "TID", "COMM", "SOCKET", "FD", "NAME"))
+
+def get_file(pid, fd):
+ proc = "/proc/%d/fd/%d" % (pid, fd)
+ try:
+ return os.readlink(proc)
+ except OSError as err:
+ return "N/A"
+
+# process event
+def print_event(cpu, data, size):
+ event = ct.cast(data, ct.POINTER(Data)).contents
+ tid = event.id & 0xffffffff;
+
+ cnt = min(MAX_FD, event.fd_cnt);
+
+ if args.name and bytes(args.name) not in event.comm:
+ return
+
+ for i in range(0, cnt):
+ global initial_ts
+
+ if not initial_ts:
+ initial_ts = event.ts
+
+ if args.timestamp:
+ delta = event.ts - initial_ts
+ print("%-14.9f" % (float(delta) / 1000000), end="")
+
+ print("%-6s %-6d %-16s " %
+ ("SEND" if event.action == ACTION_SEND else "RECV",
+ tid, event.comm.decode()), end = '')
+
+ sock = "%d:%s" % (event.sock_fd, get_file(tid, event.sock_fd))
+ print("%-25s " % sock, end = '')
+
+ fd = event.fd[i]
+ fd_file = get_file(tid, fd) if event.action == ACTION_SEND else ""
+ print("%-5d %s" % (fd, fd_file))
+
+# loop with callback to print_event
+b["events"].open_perf_buffer(print_event, page_cnt=64)
+start_time = datetime.now()
+while not args.duration or datetime.now() - start_time < args.duration:
+ b.perf_buffer_poll(timeout=1000)
diff --git a/tools/sofdsnoop_example.txt b/tools/sofdsnoop_example.txt
new file mode 100644
index 000000000000..740a26fdf4df
--- /dev/null
+++ b/tools/sofdsnoop_example.txt
@@ -0,0 +1,69 @@
+Demonstrations of sofdsnoop, the Linux eBPF/bcc version.
+
+sofdsnoop traces FDs passed through unix sockets
+
+# ./sofdsnoop.py
+ACTION TID COMM SOCKET FD NAME
+SEND 2576 Web Content 24:socket:[39763] 51 /dev/shm/org.mozilla.ipc.2576.23874
+RECV 2576 Web Content 49:socket:[809997] 51
+SEND 2576 Web Content 24:socket:[39763] 58 N/A
+RECV 2464 Gecko_IOThread 75:socket:[39753] 55
+
+Every file descriptor that is passed via unix sockets is displayed
+on a separate line together with process info (TID/COMM columns),
+ACTION details (SEND/RECV), the file descriptor number (FD) and its
+translation to a file name if available (NAME).
+
+The file descriptor (fd) value is bound to a process. The SEND
+lines display the fd value within the sending process. The RECV
+lines display the fd value of the sending process. That's why
+there's a translation to a name only on SEND lines, where we are
+able to find it in the task's proc records.
+
+This works by tracing the sendmsg/recvmsg system calls to provide
+the socket fds, and __scm_send/scm_detach_fds to provide
+the file descriptor details.
+
+A -T option can be used to include a timestamp column,
+and a -n option to match on a command name. Regular
+expressions are allowed. For example, matching commands
+containing "Web" with timestamps:
+
+# ./sofdsnoop.py -T -n Web
+TIME(s) ACTION TID COMM SOCKET FD NAME
+0.000000000 SEND 2576 Web Content 24:socket:[39763] 51 /dev/shm/org.mozilla.ipc.2576.25404 (deleted)
+0.000413000 RECV 2576 Web Content 49:/dev/shm/org.mozilla.ipc.2576.25404 (deleted) 51
+0.000558000 SEND 2576 Web Content 24:socket:[39763] 58 N/A
+0.000952000 SEND 2576 Web Content 24:socket:[39763] 58 socket:[817962]
+
+
+A -p option can be used to trace only a selected process:
+
+# ./sofdsnoop.py -p 2576 -T
+TIME(s) ACTION TID COMM SOCKET FD NAME
+0.000000000 SEND 2576 Web Content 24:socket:[39763] 51 N/A
+0.000138000 RECV 2576 Web Content 49:N/A 5
+0.000191000 SEND 2576 Web Content 24:socket:[39763] 58 N/A
+0.000424000 RECV 2576 Web Content 51:/dev/shm/org.mozilla.ipc.2576.25319 (deleted) 49
+
+USAGE message:
+usage: sofdsnoop.py [-h] [-T] [-p PID] [-t TID] [-n NAME] [-d DURATION]
+
+Trace file descriptors passed via socket
+
+optional arguments:
+ -h, --help show this help message and exit
+ -T, --timestamp include timestamp on output
+ -p PID, --pid PID trace this PID only
+ -t TID, --tid TID trace this TID only
+ -n NAME, --name NAME only print process names containing this name
+ -d DURATION, --duration DURATION
+ total duration of trace in seconds
+
+examples:
+ ./sofdsnoop # trace file descriptor passing
+ ./sofdsnoop -T # include timestamps
+ ./sofdsnoop -p 181 # only trace PID 181
+ ./sofdsnoop -t 123 # only trace TID 123
+ ./sofdsnoop -d 10 # trace for 10 seconds only
+ ./sofdsnoop -n main # only print process names containing "main"
--
2.17.2
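[Editor's note: the SCM_RIGHTS fd passing that sofdsnoop traces (__scm_send on the send side, scm_detach_fds on the receive side) can be exercised from plain userspace without bcc. A minimal, self-contained sketch, passing a pipe's read end across a Unix socket pair inside one process:]

```python
import array
import os
import socket

# A Unix socket pair: one end "sends" an fd, the other receives it.
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The fd being passed: the read end of a pipe.
r, w = os.pipe()

# SEND: pack the fd into an SCM_RIGHTS ancillary message; in the kernel
# this control message is what __scm_send parses.
fds = array.array("i", [r])
sender.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds.tobytes())])

# RECV: scm_detach_fds installs a fresh descriptor in the receiving task;
# recvmsg hands us its number in the ancillary data.
_, ancdata, _, _ = receiver.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
_, _, data = ancdata[0]
new_fd = array.array("i", data[:fds.itemsize])[0]

# The received descriptor refers to the same underlying pipe.
os.write(w, b"hello")
print(os.read(new_fd, 5))  # prints b'hello'
```

Running sofdsnoop (as root, on a supported kernel) alongside this script should show the corresponding SEND/RECV line pair for the transferred descriptor.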


[RFC/PATCH 1/2] tools: Add shmsnoop to spy on shm* syscalls

Jiri Olsa
 

Adding shmsnoop tool to trace System V shared memory
syscalls: shmget, shmat, shmdt, shmctl

# ./shmsnoop.py
PID COMM SYS RET ARGs
19813 server SHMGET 10000 key: 0x78020001, size: 20, shmflg: 0x3b6 (IPC_CREAT|0666)
19813 server SHMAT 7f1cf8b1f000 shmid: 0x10000, shmaddr: 0x0, shmflg: 0x0
19816 client SHMGET 10000 key: 0x78020001, size: 20, shmflg: 0x1b6 (0666)
19816 client SHMAT 7f4fd8ee7000 shmid: 0x10000, shmaddr: 0x0, shmflg: 0x0
19816 client SHMDT 0 shmaddr: 0x7f4fd8ee7000
19813 server SHMDT 0 shmaddr: 0x7f1cf8b1f000
19813 server SHMCTL 0 shmid: 0x10000, cmd: 0, buf: 0x0

Every shm* syscall invocation (SYS column) is displayed
on a separate line together with process info (PID/COMM
columns) and call details: the return value (RET column)
and the syscall arguments (ARGs column).

The ARGs column contains 'arg: value' pairs that represent the
given syscall's arguments as described in its manpage.

It supports the standard options to filter on PID/TID,
to specify the trace duration, and to filter on command
name, like:

./shmsnoop # trace all shm*() syscalls
./shmsnoop -T # include timestamps
./shmsnoop -p 181 # only trace PID 181
./shmsnoop -t 123 # only trace TID 123
./shmsnoop -d 10 # trace for 10 seconds only
./shmsnoop -n main # only print process names containing "main"
---
README.md | 1 +
man/man8/shmsnoop.8 | 74 +++++++
snapcraft/snapcraft.yaml | 3 +
tests/python/test_tools_smoke.py | 4 +
tools/shmsnoop.py | 335 +++++++++++++++++++++++++++++++
tools/shmsnoop_example.txt | 66 ++++++
6 files changed, 483 insertions(+)
create mode 100644 man/man8/shmsnoop.8
create mode 100755 tools/shmsnoop.py
create mode 100644 tools/shmsnoop_example.txt

diff --git a/README.md b/README.md
index 50d6db0a2fc2..92b90e5d7c4c 100644
--- a/README.md
+++ b/README.md
@@ -132,6 +132,7 @@ pair of .c and .py files, and some are directories of files.
- tools/[runqlat](tools/runqlat.py): Run queue (scheduler) latency as a histogram. [Examples](tools/runqlat_example.txt).
- tools/[runqlen](tools/runqlen.py): Run queue length as a histogram. [Examples](tools/runqlen_example.txt).
- tools/[runqslower](tools/runqslower.py): Trace long process scheduling delays. [Examples](tools/runqslower_example.txt).
+- tools/[shmsnoop](tools/shmsnoop.py): Trace System V shared memory syscalls. [Examples](tools/shmsnoop_example.txt).
- tools/[slabratetop](tools/slabratetop.py): Kernel SLAB/SLUB memory cache allocation rate top. [Examples](tools/slabratetop_example.txt).
- tools/[softirqs](tools/softirqs.py): Measure soft IRQ (soft interrupt) event time. [Examples](tools/softirqs_example.txt).
- tools/[solisten](tools/solisten.py): Trace TCP socket listen. [Examples](tools/solisten_example.txt).
diff --git a/man/man8/shmsnoop.8 b/man/man8/shmsnoop.8
new file mode 100644
index 000000000000..390974f6f7ea
--- /dev/null
+++ b/man/man8/shmsnoop.8
@@ -0,0 +1,74 @@
+.TH shmsnoop 8 "2018-09-24" "USER COMMANDS"
+.SH NAME
+shmsnoop \- Trace System V shared memory syscalls. Uses Linux eBPF/bcc.
+.SH SYNOPSIS
+.B shmsnoop [\-h] [\-T] [\-p PID] [\-t TID] [\-d DURATION] [\-n NAME]
+.SH DESCRIPTION
+shmsnoop traces System V shared memory syscalls: shmget, shmat, shmdt and shmctl.
+
+Since this uses BPF, only the root user can use this tool.
+.SH REQUIREMENTS
+CONFIG_BPF and bcc.
+.SH OPTIONS
+.TP
+\-h
+Print usage message.
+.TP
+\-T
+Include a timestamp column.
+.TP
+\-p PID
+Trace this process ID only (filtered in-kernel).
+.TP
+\-t TID
+Trace this thread ID only (filtered in-kernel).
+.TP
+\-d DURATION
+Total duration of trace in seconds.
+.TP
+\-n NAME
+Only print command lines matching this command name (regex).
+.SH EXAMPLES
+.TP
+Trace all shm* syscalls:
+#
+.B shmsnoop
+.TP
+Trace all shm* syscalls, and include timestamps:
+#
+.B shmsnoop \-T
+.TP
+Only trace shm* syscalls where the process contains "server":
+#
+.B shmsnoop \-n server
+.SH FIELDS
+.TP
+TIME(s)
+Time of shm syscall return, in seconds.
+.TP
+PID
+Process ID
+.TP
+COMM
+Parent process/command name.
+.TP
+RET
+Return value of shm syscall.
+.TP
+ARGS
+"arg: value" pairs that represent the syscall arguments as described in their man pages.
+.SH SOURCE
+This is from bcc.
+.IP
+https://github.com/iovisor/bcc
+.PP
+Also look in the bcc distribution for a companion _examples.txt file containing
+example usage, output, and commentary for this tool.
+.SH OS
+Linux
+.SH STABILITY
+Unstable - in development.
+.SH AUTHOR
+Jiri Olsa
+.SH SEE ALSO
+opensnoop(1)
diff --git a/snapcraft/snapcraft.yaml b/snapcraft/snapcraft.yaml
index e4acdb27a102..fcff9624d72e 100644
--- a/snapcraft/snapcraft.yaml
+++ b/snapcraft/snapcraft.yaml
@@ -189,6 +189,9 @@ apps:
perlstat:
command: wrapper perlstat
aliases: [perlstat]
+ shmsnoop:
+ command: wrapper shmsnoop
+ aliases: [shmsnoop]
phpcalls:
command: wrapper phpcalls
aliases: [phpcalls]
diff --git a/tests/python/test_tools_smoke.py b/tests/python/test_tools_smoke.py
index ab80ecf250ec..3aad12eaa3af 100755
--- a/tests/python/test_tools_smoke.py
+++ b/tests/python/test_tools_smoke.py
@@ -262,6 +262,10 @@ class SmokeTests(TestCase):
def test_runqlen(self):
self.run_with_duration("runqlen.py 1 1")

+ @skipUnless(kernel_version_ge(4,8), "requires kernel >= 4.8")
+ def test_shmsnoop(self):
+ self.run_with_int("shmsnoop.py")
+
def test_slabratetop(self):
self.run_with_duration("slabratetop.py 1 1")

diff --git a/tools/shmsnoop.py b/tools/shmsnoop.py
new file mode 100755
index 000000000000..d82b2f5b76ff
--- /dev/null
+++ b/tools/shmsnoop.py
@@ -0,0 +1,335 @@
+#!/usr/bin/python
+# @lint-avoid-python-3-compatibility-imports
+#
+# shmsnoop Trace shm*() syscalls.
+# For Linux, uses BCC, eBPF. Embedded C.
+#
+# USAGE: shmsnoop [-h] [-T] [-p PID] [-t TID] [-d DURATION] [-n NAME]
+#
+# Copyright (c) 2018 Jiri Olsa.
+# Licensed under the Apache License, Version 2.0 (the "License")
+#
+# 08-Oct-2018 Jiri Olsa Created this.
+
+from __future__ import print_function
+from bcc import ArgString, BPF
+import argparse
+import ctypes as ct
+from datetime import datetime, timedelta
+
+# arguments
+examples = """examples:
+ ./shmsnoop # trace all shm*() syscalls
+ ./shmsnoop -T # include timestamps
+ ./shmsnoop -p 181 # only trace PID 181
+ ./shmsnoop -t 123 # only trace TID 123
+ ./shmsnoop -d 10 # trace for 10 seconds only
+ ./shmsnoop -n main # only print process names containing "main"
+"""
+parser = argparse.ArgumentParser(
+ description="Trace shm*() syscalls",
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ epilog=examples)
+parser.add_argument("-T", "--timestamp", action="store_true",
+ help="include timestamp on output")
+parser.add_argument("-p", "--pid",
+ help="trace this PID only")
+parser.add_argument("-t", "--tid",
+ help="trace this TID only")
+parser.add_argument("-d", "--duration",
+ help="total duration of trace in seconds")
+parser.add_argument("-n", "--name",
+ type=ArgString,
+ help="only print process names containing this name")
+parser.add_argument("--ebpf", action="store_true",
+ help=argparse.SUPPRESS)
+args = parser.parse_args()
+debug = 0
+if args.duration:
+ args.duration = timedelta(seconds=int(args.duration))
+
+# define BPF program
+bpf_text = """
+#include <uapi/linux/ptrace.h>
+#include <uapi/linux/limits.h>
+#include <linux/sched.h>
+
+struct val_t {
+ u64 id;
+ u64 ts;
+ int sys;
+ unsigned long key;
+ unsigned long size;
+ unsigned long shmflg;
+ unsigned long shmid;
+ unsigned long cmd;
+ unsigned long buf;
+ unsigned long shmaddr;
+ unsigned long ret;
+ char comm[TASK_COMM_LEN];
+};
+
+BPF_HASH(infotmp, u64, struct val_t);
+BPF_PERF_OUTPUT(events);
+
+enum {
+ SYS_SHMGET,
+ SYS_SHMAT,
+ SYS_SHMDT,
+ SYS_SHMCTL,
+};
+
+static int enter(struct val_t *val)
+{
+ u64 id = bpf_get_current_pid_tgid();
+ u32 pid = id >> 32; // PID is higher part
+ u32 tid = id; // Cast and get the lower part
+
+ FILTER
+
+ val->id = id;
+ infotmp.update(&id, val);
+ return 0;
+}
+
+int trace_return(struct pt_regs *ctx)
+{
+ u64 id = bpf_get_current_pid_tgid();
+ u64 tsp = bpf_ktime_get_ns();
+ struct val_t *val;
+
+ val = infotmp.lookup(&id);
+ if (val == 0)
+ return 0;
+
+ if (bpf_get_current_comm(&val->comm, sizeof(val->comm)) != 0)
+ goto out;
+
+ val->ts = tsp / 1000;
+ val->ret = PT_REGS_RC(ctx);
+ events.perf_submit(ctx, val, sizeof(*val));
+
+out:
+ infotmp.delete(&id);
+ return 0;
+}
+
+int trace_shmget(struct pt_regs *ctx, struct pt_regs *_p)
+{
+ struct val_t val = {
+ .sys = SYS_SHMGET,
+ };
+ struct pt_regs p;
+
+ if (bpf_probe_read(&p, sizeof(p), (void *) _p))
+ return 0;
+
+ val.key = PT_REGS_PARM1(&p);
+ val.size = PT_REGS_PARM2(&p);
+ val.shmflg = PT_REGS_PARM3(&p);
+ return enter(&val);
+};
+
+int trace_shmat(struct pt_regs *ctx, struct pt_regs *_p)
+{
+ struct val_t val = {
+ .sys = SYS_SHMAT,
+ };
+ struct pt_regs p;
+
+ if (bpf_probe_read(&p, sizeof(p), (void *) _p))
+ return 0;
+
+ val.shmid = PT_REGS_PARM1(&p);
+ val.shmaddr = PT_REGS_PARM2(&p);
+ val.shmflg = PT_REGS_PARM3(&p);
+ return enter(&val);
+};
+
+int trace_shmdt(struct pt_regs *ctx, struct pt_regs *_p)
+{
+ struct val_t val = {
+ .sys = SYS_SHMDT,
+ };
+ struct pt_regs p;
+
+ if (bpf_probe_read(&p, sizeof(p), (void *) _p))
+ return 0;
+
+ val.shmaddr = PT_REGS_PARM1(&p);
+ return enter(&val);
+};
+
+int trace_shmctl(struct pt_regs *ctx, struct pt_regs *_p)
+{
+ struct val_t val = {
+ .sys = SYS_SHMCTL,
+ };
+ struct pt_regs p;
+
+ if (bpf_probe_read(&p, sizeof(p), (void *) _p))
+ return 0;
+
+ val.shmid = PT_REGS_PARM1(&p);
+ val.cmd = PT_REGS_PARM2(&p);
+ val.buf = PT_REGS_PARM3(&p);
+ return enter(&val);
+};
+
+"""
+if args.tid: # TID trumps PID
+ bpf_text = bpf_text.replace('FILTER',
+ 'if (tid != %s) { return 0; }' % args.tid)
+elif args.pid:
+ bpf_text = bpf_text.replace('FILTER',
+ 'if (pid != %s) { return 0; }' % args.pid)
+else:
+ bpf_text = bpf_text.replace('FILTER', '')
+
+if debug or args.ebpf:
+ print(bpf_text)
+ if args.ebpf:
+ exit()
+
+# initialize BPF
+b = BPF(text=bpf_text)
+
+syscall_fnname = b.get_syscall_fnname("shmget")
+if BPF.ksymname(syscall_fnname) != -1:
+ b.attach_kprobe(event=syscall_fnname, fn_name="trace_shmget")
+ b.attach_kretprobe(event=syscall_fnname, fn_name="trace_return")
+
+syscall_fnname = b.get_syscall_fnname("shmat")
+if BPF.ksymname(syscall_fnname) != -1:
+ b.attach_kprobe(event=syscall_fnname, fn_name="trace_shmat")
+ b.attach_kretprobe(event=syscall_fnname, fn_name="trace_return")
+
+syscall_fnname = b.get_syscall_fnname("shmdt")
+if BPF.ksymname(syscall_fnname) != -1:
+ b.attach_kprobe(event=syscall_fnname, fn_name="trace_shmdt")
+ b.attach_kretprobe(event=syscall_fnname, fn_name="trace_return")
+
+syscall_fnname = b.get_syscall_fnname("shmctl")
+if BPF.ksymname(syscall_fnname) != -1:
+ b.attach_kprobe(event=syscall_fnname, fn_name="trace_shmctl")
+ b.attach_kretprobe(event=syscall_fnname, fn_name="trace_return")
+
+TASK_COMM_LEN = 16 # linux/sched.h
+
+SYS_SHMGET = 0
+SYS_SHMAT = 1
+SYS_SHMDT = 2
+SYS_SHMCTL = 3
+
+initial_ts = 0
+
+class Data(ct.Structure):
+ _fields_ = [
+ ("id", ct.c_ulonglong),
+ ("ts", ct.c_ulonglong),
+ ("sys", ct.c_int),
+ ("key", ct.c_ulonglong),
+ ("size", ct.c_ulonglong),
+ ("shmflg", ct.c_ulonglong),
+ ("shmid", ct.c_ulonglong),
+ ("cmd", ct.c_ulonglong),
+ ("buf", ct.c_ulonglong),
+ ("shmaddr", ct.c_ulonglong),
+ ("ret", ct.c_ulonglong),
+ ("comm", ct.c_char * TASK_COMM_LEN),
+ ]
+
+# header
+if args.timestamp:
+ print("%-14s" % ("TIME(s)"), end="")
+print("%-6s %-16s %6s %16s ARGs" %
+ ("TID" if args.tid else "PID", "COMM", "SYS", "RET"))
+
+def sys_name(sys):
+ switcher = {
+ SYS_SHMGET: "SHMGET",
+ SYS_SHMAT: "SHMAT",
+ SYS_SHMDT: "SHMDT",
+ SYS_SHMCTL: "SHMCTL",
+ }
+ return switcher.get(sys, "N/A")
+
+shmget_flags = [
+ { 'name' : 'IPC_CREAT', 'value' : 0o1000 },
+ { 'name' : 'IPC_EXCL', 'value' : 0o2000 },
+ { 'name' : 'SHM_HUGETLB', 'value' : 0o4000 },
+ { 'name' : 'SHM_HUGE_2MB', 'value' : 21 << 26 },
+ { 'name' : 'SHM_HUGE_1GB', 'value' : 30 << 26 },
+ { 'name' : 'SHM_NORESERVE', 'value' : 0o10000 },
+ { 'name' : 'SHM_EXEC', 'value' : 0o100000 }
+]
+
+shmat_flags = [
+ { 'name' : 'SHM_RDONLY', 'value' : 0o10000 },
+ { 'name' : 'SHM_RND', 'value' : 0o20000 },
+ { 'name' : 'SHM_REMAP', 'value' : 0o40000 },
+ { 'name' : 'SHM_EXEC', 'value' : 0o100000 },
+]
+
+def shmflg_str(val, flags):
+ cur = filter(lambda x : x['value'] & val, flags)
+ str = "0x%x" % val
+
+ if (not val):
+ return str
+
+ str += " ("
+ cnt = 0
+ for x in cur:
+ if cnt:
+ str += "|"
+ str += x['name']
+ val &= ~x['value']
+ cnt += 1
+
+ if val != 0 or not cnt:
+ if cnt:
+ str += "|"
+ str += "0%o" % val
+
+ str += ")"
+ return str
+
+# process event
+def print_event(cpu, data, size):
+ event = ct.cast(data, ct.POINTER(Data)).contents
+ global initial_ts
+
+ if not initial_ts:
+ initial_ts = event.ts
+
+ if args.name and bytes(args.name) not in event.comm:
+ return
+
+ if args.timestamp:
+ delta = event.ts - initial_ts
+ print("%-14.9f" % (float(delta) / 1000000), end="")
+
+ print("%-6d %-16s %6s %16lx " %
+ (event.id & 0xffffffff if args.tid else event.id >> 32,
+ event.comm.decode(), sys_name(event.sys), event.ret), end = '')
+
+ if event.sys == SYS_SHMGET:
+ print("key: 0x%lx, size: %lu, shmflg: %s" %
+ (event.key, event.size, shmflg_str(event.shmflg, shmget_flags)))
+
+ if event.sys == SYS_SHMAT:
+ print("shmid: 0x%lx, shmaddr: 0x%lx, shmflg: %s" %
+ (event.shmid, event.shmaddr, shmflg_str(event.shmflg, shmat_flags)))
+
+ if event.sys == SYS_SHMDT:
+ print("shmaddr: 0x%lx" % (event.shmaddr))
+
+ if event.sys == SYS_SHMCTL:
+ print("shmid: 0x%lx, cmd: %lu, buf: 0x%x" % (event.shmid, event.cmd, event.buf))
+
+# loop with callback to print_event
+b["events"].open_perf_buffer(print_event, page_cnt=64)
+start_time = datetime.now()
+while not args.duration or datetime.now() - start_time < args.duration:
+ b.perf_buffer_poll(timeout=1000)
diff --git a/tools/shmsnoop_example.txt b/tools/shmsnoop_example.txt
new file mode 100644
index 000000000000..53bbe7091ece
--- /dev/null
+++ b/tools/shmsnoop_example.txt
@@ -0,0 +1,66 @@
+Demonstrations of shmsnoop, the Linux eBPF/bcc version.
+
+shmsnoop traces shm*() syscalls, for example:
+
+# ./shmsnoop.py
+PID COMM SYS RET ARGs
+19813 server SHMGET 10000 key: 0x78020001, size: 20, shmflg: 0x3b6 (IPC_CREAT|0666)
+19813 server SHMAT 7f1cf8b1f000 shmid: 0x10000, shmaddr: 0x0, shmflg: 0x0
+19816 client SHMGET 10000 key: 0x78020001, size: 20, shmflg: 0x1b6 (0666)
+19816 client SHMAT 7f4fd8ee7000 shmid: 0x10000, shmaddr: 0x0, shmflg: 0x0
+19816 client SHMDT 0 shmaddr: 0x7f4fd8ee7000
+19813 server SHMDT 0 shmaddr: 0x7f1cf8b1f000
+19813 server SHMCTL 0 shmid: 0x10000, cmd: 0, buf: 0x0
+
+
+Every shm* syscall (SYS column) is displayed on a
+separate line together with process info (PID/COMM
+columns) and argument details: the return value (RET
+column) and syscall arguments (ARGs column).
+
+The ARGs column contains 'arg: value' pairs that represent
+the syscall arguments as described in their man pages.
+
+This works by tracing the shm* system calls and sending
+argument details to the Python script.
+
+A -T option can be used to include a timestamp column,
+and a -n option to match on a command name. Regular
+expressions are allowed. For example, matching commands
+containing "server" with timestamps:
+
+# ./shmsnoop.py -T -n server
+TIME(s) PID COMM SYS RET ARGs
+0.563194000 19825 server SHMDT 0 shmaddr: 0x7f74362e4000
+0.563237000 19825 server SHMCTL 0 shmid: 0x18000, cmd: 0, buf: 0x0
+
+
+A -p option can be used to trace only a selected process:
+
+# ./shmsnoop.py -p 19855
+PID COMM SYS RET ARGs
+19855 server SHMDT 0 shmaddr: 0x7f4329ff8000
+19855 server SHMCTL 0 shmid: 0x20000, cmd: 0, buf: 0x0
+
+USAGE message:
+# ./shmsnoop.py -h
+usage: shmsnoop.py [-h] [-T] [-p PID] [-t TID] [-d DURATION] [-n NAME]
+
+Trace shm*() syscalls
+
+optional arguments:
+ -h, --help show this help message and exit
+ -T, --timestamp include timestamp on output
+ -p PID, --pid PID trace this PID only
+ -t TID, --tid TID trace this TID only
+ -d DURATION, --duration DURATION
+ total duration of trace in seconds
+ -n NAME, --name NAME only print process names containing this name
+
+examples:
+ ./shmsnoop # trace all shm*() syscalls
+ ./shmsnoop -T # include timestamps
+ ./shmsnoop -p 181 # only trace PID 181
+ ./shmsnoop -t 123 # only trace TID 123
+ ./shmsnoop -d 10 # trace for 10 seconds only
+ ./shmsnoop -n main # only print process names containing "main"
--
2.17.2

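As background on the flag decoding seen in the shmsnoop output above (e.g. `0x3b6 (IPC_CREAT|0666)`): the tool's shmflg_str() helper renders the raw flag word as hex plus symbolic flag names, with any leftover (permission) bits shown in octal. A simplified, standalone Python sketch of the same logic — single-bit flags only; the SHM_HUGE_* size encodings from the patch are omitted here:

```python
# Flag values as used in the patch (see shmget(2)).
SHMGET_FLAGS = [
    ("IPC_CREAT", 0o1000),
    ("IPC_EXCL", 0o2000),
    ("SHM_HUGETLB", 0o4000),
    ("SHM_NORESERVE", 0o10000),
    ("SHM_EXEC", 0o100000),
]

def shmflg_str(val, flags):
    """Render shmflg as '0xHEX (NAME|...|0PERM)'."""
    out = "0x%x" % val
    if not val:
        return out
    parts = []
    rest = val
    for name, bit in flags:
        if rest & bit:
            parts.append(name)
            rest &= ~bit
    if rest or not parts:
        parts.append("0%o" % rest)  # leftover bits, typically the mode
    return "%s (%s)" % (out, "|".join(parts))

print(shmflg_str(0x3b6, SHMGET_FLAGS))  # -> 0x3b6 (IPC_CREAT|0666)
```

0x3b6 is 0o1666, i.e. IPC_CREAT (0o1000) plus mode 0666, which matches the SHMGET line in the example output.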

[RFC 0/2] tools: Add shmsnoop/sofdsnoop tools

Jiri Olsa
 

hi,
this is an RFC patchset adding 2 tools that our customer wants
and that we'd perhaps like to see upstream.

The sofdsnoop traces FDs passed through unix sockets:

# ./sofdsnoop
ACTION TID COMM SOCKET FD NAME
SEND 2576 Web Content 24:socket:[39763] 51 /dev/shm/org.mozilla.ipc.2576.23874
RECV 2576 Web Content 49:socket:[809997] 51
SEND 2576 Web Content 24:socket:[39763] 58 N/A
...

The shmsnoop tool traces System V shared memory syscalls:

# ./shmsnoop.py
PID COMM SYS RET ARGs
19813 server SHMGET 10000 key: 0x78020001, size: 20, shmflg: 0x3b6 (IPC_CREAT|0666)
19813 server SHMAT 7f1cf8b1f000 shmid: 0x10000, shmaddr: 0x0, shmflg: 0x0
19816 client SHMGET 10000 key: 0x78020001, size: 20, shmflg: 0x1b6 (0666)
...

They are still under testing/review, but I'd like to
get some upstream folks' opinions on these.

thanks,
jirka


---
Jiri Olsa (2):
tools: Add shmsnoop to spy on shm* syscalls
tools: Add sofdsnoop to spy on fds passed through socket

README.md | 2 +
man/man8/shmsnoop.8 | 74 +++++++++++++++++++++++++
man/man8/sofdsnoop.8 | 85 +++++++++++++++++++++++++++++
snapcraft/snapcraft.yaml | 6 ++
tests/python/test_tools_smoke.py | 8 +++
tools/shmsnoop.py | 335 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
tools/shmsnoop_example.txt | 66 ++++++++++++++++++++++
tools/sofdsnoop.py | 355 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
tools/sofdsnoop_example.txt | 69 +++++++++++++++++++++++
9 files changed, 1000 insertions(+)
create mode 100644 man/man8/shmsnoop.8
create mode 100644 man/man8/sofdsnoop.8
create mode 100755 tools/shmsnoop.py
create mode 100644 tools/shmsnoop_example.txt
create mode 100755 tools/sofdsnoop.py
create mode 100644 tools/sofdsnoop_example.txt


Plans for libbpf packaging for distros?

Jesper Dangaard Brouer
 

Hi Jakub, Laura and Jiri Olsa (Cc others),

Subj: iovisor-dev] minutes: IO Visor TSC/Dev Meeting
(To: iovisor-dev <iovisor-dev@...>)
On Wed, 31 Oct 2018 14:30:25 -0700 "Brenden Blanco" <bblanco@...> wrote:
Jakub:
* working on getting libbpf packaged separately and released by distros
* FB has external mirror github.com/libbpf/libbpf
I noticed from the iovisor-dev minutes that you have plans for
packaging libbpf from the kernel tree. And via that I noticed the
github repo https://github.com/libbpf/libbpf, created by Yonghong Song.

I'm uncertain whether it makes sense to maintain this library outside
the kernel git tree.

To be honest, I have very little knowledge about building RPMs and
other package formats. I just wanted to point out that RHEL and
Fedora are now shipping bpftool, which is also part of the kernel git tree.

(Now I need input from Jiri Olsa and Laura to correct the statements below:)

AFAIK the bpftool RPM package[1] is part of the kernel-tools "Source
Package", which AFAIK gets built directly from the distro kernel
git tree via the kernel.spec file. The same happens for the perf
RPM package[2], whose "Source Package" section also points to kernel-tools.

So, my question is: can we ship/package libbpf in the same way?


Notice that an increasing number of tools link against/use libbpf,
e.g. perf, bpftool, Suricata (plus selftests and samples/bpf).


[1] https://fedora.pkgs.org/28/fedora-x86_64/bpftool-4.16.0-1.fc28.x86_64.rpm.html
[2] https://fedora.pkgs.org/29/fedora-x86_64/perf-4.18.10-300.fc29.x86_64.rpm.html
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer


minutes: IO Visor TSC/Dev Meeting

Brenden Blanco
 

Hi All,

Thanks for joining in to the discussion today. Below are my notes from
the call. Hope everyone is looking forward to good discussions at the
LPC BPF track in Vancouver in two weeks! Since that overlaps with the
usual IO Visor call, we will have our next meeting in 4 weeks time
instead.

Thanks,
Brenden

=== Discussion ===

Brenden:
* tracking some buildbot failures
* will look at adding fedora 29 target
* to tag 0.8.0 soon

Yonghong:
* updated kernel headers
* working on llvm internals for BTF support

Alexei:
* for those presenting at LPC/bpf track, please send rough draft of slides by
tomorrow
* significant effort to do deduplication of dwarf->BTF of vmlinux symbols:
300MB->2MB
* feature: atomic spin lock/unlock callable from bpf programs
* 1 lock only
* no helper calls within critical section
* BTF will be mandatory for such programs
* more to be presented at plumbers

Daniel:
* Some security fixes
* LPC slides

John:
* Work on paper/slides for LPC
* Sockmap support is enabled in Cilium!
* (mellanox?) working to add upstream support in openssl for kTLS + sockmap

Jakub:
* working on getting libbpf packaged separately and released by distros
* FB has external mirror github.com/libbpf/libbpf


=== Attendees ===
Brenden Blanco
Paul Chaignon
Dan Siemon
Nic Viljoen
Alexei Starovoitov
Andy Gospodarek
Jakub Kicinski
Daniel Borkmann
Yonghong Song
Mauricio Vasquez
JohnF
Joe Stringer
Quentin Monnet
William Tu
Quillian Rutherford


Re: Loader complexity

Dan Siemon
 

On Wed, 2018-10-31 at 11:53 -0700, Alexei Starovoitov wrote:

reasonably simple? ;)
I suspect it doesn't support bpf-to-bpf calls and BTF, right?
These were major additions that folks with custom loader
will be missing.
A lot more stuff to come with BTF, relocations, etc.
I don't think it will be feasible to replicate the same
functionality
in other libraries.
Hence everyone is highly encouraged to use libbpf.
c++, go or any other wrappers can go on top.
Whether they're kept as part of libbpf repo or repo next to it is
tbd.
Correct. No BTF, I mostly had bpf calls going but didn't finish it
because we don't need it right now.

I understand the situation but this makes me a bit sad. I wrote the
loader originally to avoid 'unsafe' code in our project.
please define 'unsafe'
Not Go in the Go runtime :)


Re: Loader complexity

Alexei Starovoitov
 

On Wed, Oct 31, 2018 at 11:52 AM Dan Siemon <dan@...> wrote:

On Wed, 2018-10-31 at 11:42 -0700, Alexei Starovoitov wrote:
On Wed, Oct 31, 2018 at 11:36 AM Dan Siemon <dan@...>
wrote:
I was on the IOVisor call, there was discussion of making the
loader
more complicated (perf stuff) and work on libbpf to support this.
Does
this refer to doing the relocations etc in the ELF file?

We have our own loader written in Go for our bpf classifier use
cases
so I'm curious what these changes may mean for us. The current
implementation was reasonably simple. Is the expectation going
forward
that libbpf is always used? Will other implementations need to
track
and duplicate this complexity or is this backwards compatible?
reasonably simple? ;)
I suspect it doesn't support bpf-to-bpf calls and BTF, right?
These were major additions that folks with custom loader
will be missing.
A lot more stuff to come with BTF, relocations, etc.
I don't think it will be feasible to replicate the same functionality
in other libraries.
Hence everyone is highly encouraged to use libbpf.
c++, go or any other wrappers can go on top.
Whether they're kept as part of libbpf repo or repo next to it is
tbd.
Correct. No BTF, I mostly had bpf calls going but didn't finish it
because we don't need it right now.

I understand the situation but this makes me a bit sad. I wrote the
loader originally to avoid 'unsafe' code in our project.
please define 'unsafe'


Re: Loader complexity

Dan Siemon
 

On Wed, 2018-10-31 at 11:42 -0700, Alexei Starovoitov wrote:
On Wed, Oct 31, 2018 at 11:36 AM Dan Siemon <dan@...>
wrote:
I was on the IOVisor call, there was discussion of making the
loader
more complicated (perf stuff) and work on libbpf to support this.
Does
this refer to doing the relocations etc in the ELF file?

We have our own loader written in Go for our bpf classifier use
cases
so I'm curious what these changes may mean for us. The current
implementation was reasonably simple. Is the expectation going
forward
that libbpf is always used? Will other implementations need to
track
and duplicate this complexity or is this backwards compatible?
reasonably simple? ;)
I suspect it doesn't support bpf-to-bpf calls and BTF, right?
These were major additions that folks with custom loader
will be missing.
A lot more stuff to come with BTF, relocations, etc.
I don't think it will be feasible to replicate the same functionality
in other libraries.
Hence everyone is highly encouraged to use libbpf.
c++, go or any other wrappers can go on top.
Whether they're kept as part of libbpf repo or repo next to it is
tbd.
Correct. No BTF, I mostly had bpf calls going but didn't finish it
because we don't need it right now.

I understand the situation but this makes me a bit sad. I wrote the
loader originally to avoid 'unsafe' code in our project.


Re: Loader complexity

Alexei Starovoitov
 

On Wed, Oct 31, 2018 at 11:36 AM Dan Siemon <dan@...> wrote:

I was on the IOVisor call, there was discussion of making the loader
more complicated (perf stuff) and work on libbpf to support this. Does
this refer to doing the relocations etc in the ELF file?

We have our own loader written in Go for our bpf classifier use cases
so I'm curious what these changes may mean for us. The current
implementation was reasonably simple. Is the expectation going forward
that libbpf is always used? Will other implementations need to track
and duplicate this complexity or is this backwards compatible?
reasonably simple? ;)
I suspect it doesn't support bpf-to-bpf calls and BTF, right?
These were major additions that folks with custom loader
will be missing.
A lot more stuff to come with BTF, relocations, etc.
I don't think it will be feasible to replicate the same functionality
in other libraries.
Hence everyone is highly encouraged to use libbpf.
c++, go or any other wrappers can go on top.
Whether they're kept as part of libbpf repo or repo next to it is tbd.


Loader complexity

Dan Siemon
 

I was on the IOVisor call, there was discussion of making the loader
more complicated (perf stuff) and work on libbpf to support this. Does
this refer to doing the relocations etc in the ELF file?

We have our own loader written in Go for our bpf classifier use cases
so I'm curious what these changes may mean for us. The current
implementation was reasonably simple. Is the expectation going forward
that libbpf is always used? Will other implementations need to track
and duplicate this complexity or is this backwards compatible?

Thanks.


reminder: IO Visor TSC/Dev Meeting

Brenden Blanco
 

Please join us tomorrow for our bi-weekly call. As usual, this meeting is
open to everybody and completely optional.
You might be interested to join if:
* You want to know what is going on in BPF land
* You are doing something interesting yourself with BPF and would like to share
* You want to know what the heck BPF is

=== IO Visor Dev/TSC Meeting ===

Every 2 weeks on Wednesday, from Wednesday, January 25, 2017, to no end date
11:00 am | Pacific Daylight Time (San Francisco, GMT-07:00) | 30 min

https://bluejeans.com/568677804/

https://www.timeanddate.com/worldclock/meetingdetails.html?year=2018&month=10&day=31&hour=18&min=0&sec=0&p1=900


Re: Any example of using BTF?

Martin KaFai Lau
 

On Mon, Oct 29, 2018 at 09:18:35PM +0800, Wang Jian wrote:
Hi all,

I am new to BPF/BTF but I am interested in it and want to try it.
So which llvm/clang version support it? And any examples/documents
will be appreciated.
Here is a doc on how to generate the BTF:
https://cilium.readthedocs.io/en/v1.3/bpf/#llvm

(search for BTF)


Any example of using BTF?

Wang Jian
 

Hi all,

I am new to BPF/BTF but I am interested in it and want to try it.
So which llvm/clang version support it? And any examples/documents
will be appreciated.

--
Regards,
Wang Jian


Re: unknown func bpf_clone_redirect#13

Kanthi P
 

Thanks Daniel, will try using BPF.

Just curious: is there any reason why it is not supported for XDP, since only redirect (without clone) is supported there?

Regards,
Kanthi


On Sun, Oct 21, 2018 at 1:42 AM Daniel Borkmann <daniel@...> wrote:
On 10/20/2018 09:03 PM, Kanthi P wrote:
> Hi,
>
> Has anyone been able to use bpf_clone_redirect method to clone the packet
> and forward to different interface?
>
> I used these programs and they work for bpf_redirect.
> Now I am trying to modify these to use bpf_clone_redirect, but when I run
> the program I get "unknown func bpf_clone_redirect#13" error.
>
> -       return bpf_redirect(*ifindex, 0);
> +       return bpf_clone_redirect(data, *ifindex, 0);
>
> I see bpf_clone_redirect in bpf.h, not sure what is that I am missing.
>
> https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_redirect_kern.c
> https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_redirect_user.c
>
> Can someone please help figure it out?

They only work for tc/BPF program types as they are based on sk_buff and not xdp_buff.

> Thanks,
> Kanthi
>
>
>
>


Re: unknown func bpf_clone_redirect#13

Daniel Borkmann
 

On 10/20/2018 09:03 PM, Kanthi P wrote:
Hi,

Has anyone been able to use bpf_clone_redirect method to clone the packet
and forward to different interface?

I used these programs and they work for bpf_redirect.
Now I am trying to modify these to use bpf_clone_redirect, but when I run
the program I get "unknown func bpf_clone_redirect#13" error.

- return bpf_redirect(*ifindex, 0);
+ return bpf_clone_redirect(data, *ifindex, 0);

I see bpf_clone_redirect in bpf.h, not sure what is that I am missing.

https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_redirect_kern.c
https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_redirect_user.c

Can someone please help figure it out?
They only work for tc/BPF program types as they are based on sk_buff and not xdp_buff.

Thanks,
Kanthi
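Daniel's distinction can be illustrated with a minimal tc/BPF sketch: bpf_clone_redirect() takes the skb itself, so it exists only for sk_buff-based (tc) program types, not XDP. This is a hedged sketch, not a tested program; MIRROR_IFINDEX and the file name are hypothetical, and it would be built with clang -O2 -target bpf and attached with tc filter ... bpf:

```c
/* Sketch only: a tc (sk_buff-based) classifier that clones each
 * packet to a second interface. Build: clang -O2 -target bpf -c clone.c */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>

#define SEC(name) __attribute__((section(name), used))

/* bpf_clone_redirect is helper #13; it operates on the skb,
 * which is why it is unavailable to xdp_buff-based programs. */
static long (*bpf_clone_redirect)(struct __sk_buff *skb,
                                  __u32 ifindex, __u64 flags) = (void *) 13;

#define MIRROR_IFINDEX 4  /* hypothetical target device index */

SEC("classifier")
int clone_mirror(struct __sk_buff *skb)
{
    /* send a clone out MIRROR_IFINDEX; the original continues as usual */
    bpf_clone_redirect(skb, MIRROR_IFINDEX, 0);
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
```

XDP programs, by contrast, run on an xdp_buff before any skb exists, so there is nothing to clone; only single-buffer forwarding via bpf_redirect()/bpf_redirect_map() is available there.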




unknown func bpf_clone_redirect#13

Kanthi P
 

Hi,

Has anyone been able to use bpf_clone_redirect method to clone the packet and forward to different interface?

I used these programs and they work for bpf_redirect.
Now I am trying to modify these to use bpf_clone_redirect, but when I run the program I get "unknown func bpf_clone_redirect#13" error.

-       return bpf_redirect(*ifindex, 0);
+       return bpf_clone_redirect(data, *ifindex, 0);

I see bpf_clone_redirect in bpf.h, not sure what is that I am missing.


Can someone please help figure it out?

Thanks,
Kanthi


minutes: IO Visor TSC/Dev Meeting

Brenden Blanco
 

Hi All,

Thanks for dialing in today. As usual, here are my notes.

=== Discussion ===
Alexei
* bpf-next closing this weekend, to remain closed during merge window
- get your last minute fixes in
* schedule for bpf microconference is near-finalized, available on the website

Daniel
* ktls and sockmap changes have been merged
- few remaining fixes being worked on
* some perf and libbpf improvements

John
* ktls and sockmap work as well
- currently adding support for it into cilium
- integrating into CI infrastructure
- adding metadata push support similar to xdp approach

Jiong
* more work on 32 bit support in verifier
- identified a couple regressions, fixing those
- how to scale verifier implementation to large numbers of instructions?
- will work on this

William
* looking at xdp support in vmxnet3 driver

Brendan
* bpftrace launched!
- github.com/iovisor/bpftrace
- new issues being found, fixed
- recently fixed compilation for llvm7
* being used to find real issues in netflix
- fd leak detector
* some kernel feature requests identified:
- ppid
- access to rusage statistics

Mauricio
* Will split out patch to just include stackmap, other pieces still being
discussed

Joe
* writing tests for sockmap lookup patches
- possible to have socket lookup from xdp?

=== Attendees ===
Brenden Blanco
Alexei
Daniel Borkmann
Jakub Kicinski
Joe Stringer
Nic Viljoen
Paul Chaignon
Quentin Monnet
Jiong Wang
Yifeng Sun
Andy Gospodarek
Brendan Gregg
John F
William Tu
Mauricio Vasquez
Martin Lau
