• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Ah yes, I see. Since plain TCP has no built-in SNI, this is not really possible.

    You could try IPv6: even within a single routable /64 prefix, you can choose the host part of the address freely. Also take a look at overlay VPN solutions like Netbird: they let you run multiple clients, which you could use to assign multiple IPv4 addresses to your server and then route them differently (you mentioned installing client software before)…
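
    For illustration, a minimal sketch of picking host addresses freely out of a /64 (the 2001:db8: prefix and eth0 are placeholders, not your actual values):

        # add two additional addresses from the same routable /64
        ip -6 addr add 2001:db8:aaaa:bbbb::10/64 dev eth0
        ip -6 addr add 2001:db8:aaaa:bbbb::20/64 dev eth0

        # each service can then bind to its own address;
        # verify what is listening where
        ss -6 -tlnp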

    Finally, I’m not sure why you would inject Traefik into the networking chain. In the end, a direct, kernel-space connection is always faster than having a user-space proxy in between.


  • Okay, I’ll try explaining it. Yes, there is very little documentation for this in particular, so… yeah.

    You start by installing kube-vip into your cluster. Make sure to configure it correctly, so that the uplink interface of your workers is used for the VIP and not e.g. internal ones (see the env var “vip_interface”). Also enable the service-based functions and the respective leader-election mechanism (“svc_enable”, “vip_leaderelection”). I would recommend ARP mode as well, because I’ve never tested the others.
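
    Roughly, the relevant env section of the kube-vip DaemonSet could look like this (just a sketch; the interface name is an example, see the kube-vip docs for the full manifest):

        # excerpt of the kube-vip container env (values are examples)
        env:
          - name: vip_interface
            value: "eth0"          # your workers' uplink interface
          - name: svc_enable
            value: "true"          # watch Services of type LoadBalancer
          - name: vip_leaderelection
            value: "true"          # leader election for the VIP
          - name: vip_arp
            value: "true"          # announce the VIP via ARP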

    Then you create a new “LoadBalancer” service in k8s, on which you also set the “loadBalancerIP” field to the desired IPv4 VIP. Due to the previous kube-vip configuration, it should pick that up. You can take a look at the operator’s logs to learn more.
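
    Something like this (names, ports and the VIP are placeholders; the traefik selector is just an assumption for the example):

        apiVersion: v1
        kind: Service
        metadata:
          name: traefik-lb
        spec:
          type: LoadBalancer
          loadBalancerIP: 192.0.2.10    # the desired IPv4 VIP
          selector:
            app: traefik                # whatever your pods are labeled with
          ports:
            - name: https
              port: 443
              targetPort: 443
              protocol: TCP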

    Theoretically that’s it. Now one of your nodes will start serving the service port under the VIP. The service may target any TCP/UDP traffic, not only Traefik.

    There is one more thing: the “externalTrafficPolicy” field on the LB service, if set to “Local”, disables any internal routing via your CNI, so you will even be able to see the real source IPv4 of your clients. Be careful with this on non-kube-vip services, as nodes without the targeted pods will not be able to serve the traffic. Kube-vip only promotes a node to serve the VIP if it also runs a pod targeted by the service (see its docs/config).
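
    Added to the service spec from above, that would look something like this (again just a sketch):

        spec:
          type: LoadBalancer
          loadBalancerIP: 192.0.2.10
          externalTrafficPolicy: Local   # keep traffic on the serving node, preserve client IPs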



  • The most common cause would be something with your system memory. I could imagine that this caused the timeout of your CPU, which waited for startup code that never arrived.

    If you want to test that, swap your memory sticks around, or tell the kernel to ignore that CPU (see the kernel’s command line parameters).
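
    A rough sketch of both kernel-side approaches (the CPU number is a placeholder; how you edit the kernel command line depends on your bootloader/distro):

        # at runtime: take a suspect core offline via sysfs
        echo 0 > /sys/devices/system/cpu/cpu3/online

        # at boot: limit how many CPUs the kernel brings up at all,
        # by adding e.g. this parameter to the kernel command line
        maxcpus=4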