How To Proxy TCP Traffic with a Load-balancer Service
Estimated time to read: 3 minutes
In this tutorial you will learn how to set up a load-balancer service to proxy TCP traffic.
In most use cases you'll use an Ingress Nginx Controller or a Kubernetes HTTPRoute to proxy traffic to a Pod, but these are optimized for HTTP traffic.
We will use the OpenSSH server as an example. Although an OpenSSH server is not a service you are likely to deploy to a Kubernetes cluster, it illustrates how to proxy TCP traffic without using more complicated ingress controllers.
Prerequisites:
In this tutorial we use the following tools:
- kubectl (https://kubernetes.io/docs/tasks/tools/)
Make sure it is installed before you begin.
The tutorial will be split into three parts:
- Deploy Resources
- Limit Access
- Cleanup
Deploy Resources
1. Create a file openssh-configmap.yaml containing the OpenSSH server configuration; replace at least the user password:
apiVersion: v1
kind: ConfigMap
metadata:
  name: openssh
  namespace: default
data:
  PUID: "1000"
  PGID: "1000"
  TZ: "Etc/UTC"
  SUDO_ACCESS: "false"
  PASSWORD_ACCESS: "true"
  USER_PASSWORD: "replace with a secure password"
  USER_NAME: git
2. Create a file openssh-deployment.yaml containing the OpenSSH server deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openssh
  namespace: default
spec:
  selector:
    matchLabels:
      app: openssh
  template:
    metadata:
      labels:
        app: openssh
    spec:
      containers:
        - name: openssh
          image: lscr.io/linuxserver/openssh-server:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2222
          envFrom:
            - configMapRef:
                name: openssh
Info
The pod will listen on port 2222 for incoming SSH connections.
3. Create a file openssh-service.yaml containing a load-balancer service:
kind: Service
apiVersion: v1
metadata:
  name: openssh
  namespace: default
  annotations:
    loadbalancer.openstack.org/lb-method: SOURCE_IP
    # Disable the PROXY protocol, it is not supported by SSH
    loadbalancer.openstack.org/proxy-protocol: "false"
    # For demo purposes we do not need a health monitor
    loadbalancer.openstack.org/enable-health-monitor: "false"
    # Extend timeouts to 300 seconds (300000 ms) as SSH sessions tend to be much longer than HTTP requests
    loadbalancer.openstack.org/timeout-member-connect: "300000"
    loadbalancer.openstack.org/timeout-member-data: "300000"
    loadbalancer.openstack.org/timeout-client-data: "300000"
spec:
  type: LoadBalancer
  selector:
    app: openssh
  ports:
    - protocol: TCP
      port: 2222
      targetPort: 2222
Info
The service will create a load-balancer with a listener accepting traffic on port 2222 and proxy that traffic to the openssh pod on port 2222.
4. Deploy all the required resources:
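For example, assuming the three manifest files created above are in the current directory:
kubectl apply -f openssh-configmap.yaml -f openssh-deployment.yaml -f openssh-service.yaml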
5. Confirm the pod and service are created:
kubectl get pods
NAME READY STATUS RESTARTS AGE
openssh-6ccd444776-x7khr 1/1 Running 0 2m
kubectl get service openssh
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openssh LoadBalancer 100.71.44.192 81.24.6.166 2222:31253/TCP 12m
Info
The load-balancer was assigned the address 81.24.6.166, which is the public IP to connect to.
6. Confirm we can connect to the openssh server:
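For example, using the git user defined in the ConfigMap and the external IP assigned in the previous step (yours will differ):
ssh -p 2222 git@81.24.6.166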
Our openssh server in Kubernetes is now accessible from the public internet. Continue with the tutorial to limit access.
Limit Access
Access to the openssh service can easily be limited by extending the load-balancer service with allowed source ranges. This will block unwanted parties from connecting to the openssh server.
1. Add the following excerpt to openssh-service.yaml, replacing the source address with your own:
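One way to do this is with the standard loadBalancerSourceRanges field, which restricts the client address ranges allowed to reach the listener; the CIDR below is only a placeholder:
spec:
  loadBalancerSourceRanges:
    - 203.0.113.0/24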
2. Apply your changes:
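For example, by re-applying the modified service manifest:
kubectl apply -f openssh-service.yaml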
Access to the openssh server should now be limited to the addresses given in the load-balancer service.
For more load-balancer options, see the OpenStack Cloud Controller documentation.
Cleanup
To wrap it up, delete all previously created resources:
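For example, deleting by the same manifest files used earlier:
kubectl delete -f openssh-service.yaml -f openssh-deployment.yaml -f openssh-configmap.yaml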