Performance with sockhash in istio/k8s

Forrest Chen

Cilium has an idea to accelerate packet forwarding performance using sockops & sockhash when running the istio service mesh; the code is here. But this feature is heavily coupled with the cilium codebase, so I would like to extract the sockhash part from cilium. I found some demo code and tried to see whether it really improves performance. My test case is from . In this case, I set up two pods, a fortio client and a fortio server, and generate traffic from the client using

kubectl -n $NAMESPACE exec -it $client_pod -- fortio load -c 1 -qps 10000 -t 30s -a -r 0.00005 -httpbufferkb=128 "http://$server_svc_ip:8080/echo?size=1024"

The qps decreases sharply from 6k to 200+ when the sockmap prog is applied. If I enter the server pod and test using

fortio load -c 1 -qps 10000 -t 30s -a -r 0.00005 -httpbufferkb=128 ""

the qps increases from 6k to 9k.
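For context, the "sockmap prog" in demos like this typically has two halves; the first is a sockops program that inserts every established socket into a sockhash keyed by the 4-tuple. The sketch below shows roughly what I mean — the map name, key layout, and section names here are my assumptions, not the actual demo code:

```c
/* Sketch of a typical sockops program that populates a sockhash.
 * Names and key layout are illustrative assumptions. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct sock_key {
	__u32 sip4;
	__u32 dip4;
	__u32 sport;
	__u32 dport;
};

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 65535);
	__type(key, struct sock_key);
	__type(value, int);
} sock_ops_map SEC(".maps");

SEC("sockops")
int bpf_sockops(struct bpf_sock_ops *skops)
{
	struct sock_key key = {};

	switch (skops->op) {
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
		key.sip4  = skops->local_ip4;
		key.dip4  = skops->remote_ip4;
		/* local_port is host byte order, remote_port is
		 * network byte order, per the bpf_sock_ops ABI */
		key.sport = skops->local_port;
		key.dport = bpf_ntohl(skops->remote_port);
		bpf_sock_hash_update(skops, &sock_ops_map, &key, BPF_ANY);
		break;
	}
	return 0;
}

char _license[] SEC("license") = "GPL";
```

The second half is the sk_msg program (bpf_redir_proxy below) attached to that map, which redirects messages between sockets found in the hash.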
In addition, I also overrode the bpf_redir function so that it always returns SK_PASS and never calls msg_redirect_hash:

int bpf_redir_proxy(struct sk_msg_md *msg)
{
        /* stub: skip the msg_redirect_hash call entirely */
        return SK_PASS;
}

and the qps is still only about 200+, so I think merely invoking bpf_redir_proxy is expensive and that is why the qps decreases so sharply?
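For comparison, the unmodified redirect program in typical sockmap demos looks roughly like the sketch below. This is an assumed shape, not the actual demo code; the key must be the reversed 4-tuple of whatever the sockops side inserted, so that the lookup finds the peer socket:

```c
/* Sketch of an sk_msg redirect program (assumed shape).
 * sock_key / sock_ops_map must match the sockops side. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct sock_key {
	__u32 sip4;
	__u32 dip4;
	__u32 sport;
	__u32 dport;
};

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 65535);
	__type(key, struct sock_key);
	__type(value, int);
} sock_ops_map SEC(".maps");

SEC("sk_msg")
int bpf_redir_proxy(struct sk_msg_md *msg)
{
	struct sock_key key = {};

	/* Reverse the 4-tuple: from the sender's point of view,
	 * the peer socket was inserted with our remote side as
	 * its local side. remote_port is network byte order,
	 * local_port is host byte order, per the sk_msg_md ABI. */
	key.sip4  = msg->remote_ip4;
	key.dip4  = msg->local_ip4;
	key.sport = bpf_ntohl(msg->remote_port);
	key.dport = msg->local_port;

	bpf_msg_redirect_hash(msg, &sock_ops_map, &key, BPF_F_INGRESS);
	return SK_PASS;
}

char _license[] SEC("license") = "GPL";
```

If the lookup key never matches (for example because of a byte-order mismatch between the two programs), the helper is a no-op and every message still pays the sk_msg invocation cost without any redirect benefit.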

I also entered the fortio server container and ran the test using fortio load -c 1 -qps 10000 -t 30s -a -r 0.00005 -httpbufferkb=128 "", where the IP is the server's own (local) IP; the result is also only about 200+ qps… In this case, a packet from the client will first be redirected to the envoy proxy, then the envoy proxy will send the packet to the server with dst address

Why would this happen? How should I debug it? I need your help.

forrest chen