Performance with sockhash in istio/k8s


Forrest Chen

Cilium has an idea to accelerate packet forwarding performance with sockops & sockhash when running the Istio service mesh (the code is here). But that function is heavily coupled with the Cilium codebase, so I would like to extract just the sockhash part from Cilium. I found a demo at https://github.com/zachidan/ebpf-sockops and tried it to see whether it really improves performance.

My test case is from https://github.com/istio/tools/tree/master/perf/benchmark . I set up two pods, a fortio client and a fortio server, and generate load from the client with

kubectl -n $NAMESPACE exec -it $client_pod -- fortio load -c 1 -qps 10000 -t 30s -a -r 0.00005 -httpbufferkb=128 "http://$server_svc_ip:8080/echo?size=1024"

After applying the sockmap programs, the qps drops sharply from about 6k to 200+. However, if I exec into the server pod and test against loopback with

fortio load -c 1 -qps 10000 -t 30s -a -r 0.00005 -httpbufferkb=128 "http://127.0.0.1:8080/echo?size=1024"

the qps increases from about 6k to 9k.
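For context, as I understand it the demo attaches a sockops program at the cgroup level to record the sockets of every established connection in a sockhash, and attaches the sk_msg program to that same map (msg_verdict hook), so every sendmsg on a socket in the map runs it. Below is a minimal libbpf-style sketch of the sockops side, written by me rather than taken from the demo; the map name sock_ops_map, the key layout, and the port byte-order normalization are my assumptions.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define AF_INET 2

/* 4-tuple identifying one direction of a connection */
struct sock_key {
    __u32 sip4;
    __u32 dip4;
    __u32 sport;
    __u32 dport;
};

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 65535);
    __type(key, struct sock_key);
    __type(value, __u32);
} sock_ops_map SEC(".maps");

SEC("sockops")
int bpf_sockops(struct bpf_sock_ops *skops)
{
    struct sock_key key = {};

    switch (skops->op) {
    /* record both ends of every locally established TCP connection */
    case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
    case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
        if (skops->family != AF_INET)
            break;
        key.sip4 = skops->local_ip4;
        key.dip4 = skops->remote_ip4;
        /* local_port is host byte order and remote_port network byte
         * order (per <linux/bpf.h>); one plausible normalization is to
         * bring both to host order. Whatever convention is used here
         * must be mirrored exactly when the sk_msg program builds its
         * lookup key. */
        key.sport = skops->local_port;
        key.dport = bpf_ntohl(skops->remote_port);
        bpf_sock_hash_update(skops, &sock_ops_map, &key, BPF_ANY);
        break;
    }
    return 0;
}

char _license[] SEC("license") = "GPL";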
In addition, I also overrode the bpf_redir function so that it always returns SK_PASS and never calls msg_redirect_hash:
__section("sk_msg")
int bpf_redir_proxy(struct sk_msg_md *msg)
{
    /* short-circuit: always pass the message along the normal path,
     * so the original lookup and msg_redirect_hash call below are
     * never reached */
    if (1)
        return SK_PASS;
    ...
    ...
}
The qps is still only about 200+, so I suspect it is the call to bpf_redir_proxy on every message that is expensive, and that is why the qps decreases so sharply?
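For comparison, the unmodified redirect path looks roughly like the sketch below (again my own sketch rather than the demo's exact code, reusing struct sock_key and sock_ops_map from the sockops sketch above): it builds the key from the peer's point of view and, if that peer socket is in the sockhash, splices the payload straight into it.

SEC("sk_msg")
int bpf_redir_proxy(struct sk_msg_md *msg)
{
    struct sock_key key = {};

    /* build the key from the peer's point of view: what is "remote"
     * for this message is "local" for the socket we want to reach */
    key.sip4 = msg->remote_ip4;
    key.dip4 = msg->local_ip4;
    key.sport = bpf_ntohl(msg->remote_port); /* same normalization as in sockops */
    key.dport = msg->local_port;

    /* on a hit the data bypasses the rest of the TCP/IP stack; on a
     * miss the message simply continues down the normal send path */
    bpf_msg_redirect_hash(msg, &sock_ops_map, &key, BPF_F_INGRESS);
    return SK_PASS;
}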

I also entered the fortio server container and ran fortio load -c 1 -qps 10000 -t 30s -a -r 0.00005 -httpbufferkb=128 "http://172.16.2.70:8080/echo?size=1024", where 172.16.2.70 is the server pod's own IP. The result is again only about 200+ qps… In this case, a packet from the client is first redirected to the Envoy sidecar, and Envoy then sends it on to the server with destination address 127.0.0.1:8080.
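To see which connections actually hit the hook on this Envoy-proxied path, I could temporarily swap in a trace-only sk_msg program like the hypothetical sketch below (not from the demo; it never redirects, only prints the raw 4-tuple with bpf_printk, so the traffic can be watched in /sys/kernel/debug/tracing/trace_pipe):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("sk_msg")
int bpf_redir_trace(struct sk_msg_md *msg)
{
    /* IPs are raw network-byte-order u32s; ports are the raw context
     * values (local_port host order, remote_port network order) */
    bpf_printk("sk_msg src 0x%x:%u\n", msg->local_ip4, msg->local_port);
    bpf_printk("sk_msg dst 0x%x:%u\n", msg->remote_ip4, msg->remote_port);
    return SK_PASS;
}

char _license[] SEC("license") = "GPL";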

Why would this happen? How should I debug it? I need your help.

Thanks,
Forrest Chen