I am attempting to cut costs by switching my Kubernetes cluster on AWS to IPv6 (since public IPv4 addresses now cost money). I run the following commands for a basic deployment:
```
kops create cluster --ipv6 \
  --name=$NAME \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --cloud=aws \
  --node-count 3 \
  --zones us-east-2a,us-east-2b,us-east-2c \
  --master-zones us-east-2a,us-east-2b,us-east-2c

kops update cluster --name $NAME --yes --admin
```
I see 3 masters and 3 nodes created in my EC2 console, and am able to access the Kubernetes API. However, `kubectl get nodes` only ever shows the 3 masters joining the cluster. Normally I'd SSH into the nodes and view the logs at this point, but that does not work either:
```
ssh ubuntu@<ipv6>
ssh: connect to host <ipv6> port 22: Operation timed out
```
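For what it's worth, the failure can be narrowed down with SSH forced onto IPv6 in verbose mode and a bare TCP probe (a diagnostic sketch; `<ipv6>` is a placeholder for the instance's public IPv6 address):

```bash
# Force IPv6 and show client-side debug output; a network/security-group
# problem typically stalls before any banner exchange
ssh -6 -vvv ubuntu@<ipv6>

# Raw TCP reachability check of port 22 over IPv6 (OpenBSD netcat flags)
nc -6 -vz <ipv6> 22
```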
What I have verified so far:

- I can see that the SSH secret is attached to all 6 EC2 instances
- The EC2 instances' "Security" tab shows a port 22 inbound "from any IPv6" (`::/0`) rule (a CLI cross-check is sketched after this list)
- My computer has an IPv6 address, and IPv6 is enabled by the ISP & router
- A `traceroute6` to the IPv6 address succeeds
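The console checks above can also be cross-checked from the AWS CLI, along with the subnet route tables. This is a rough sketch; the security-group and VPC IDs are placeholders for whatever kops created:

```bash
# Does the node security group really allow inbound TCP/22 from ::/0?
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'

# Do the VPC's route tables carry an IPv6 default route (::/0) to an internet gateway?
aws ec2 describe-route-tables \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'RouteTables[].Routes'
```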
FWIW, a rolling-update shows:
```
kops rolling-update cluster --yes

NAME                      STATUS  NEEDUPDATE  READY  MIN  TARGET  MAX  NODES
control-plane-us-east-2a  Ready   0           1      1    1       1    1
control-plane-us-east-2b  Ready   0           1      1    1       1    1
control-plane-us-east-2c  Ready   0           1      1    1       1    1
nodes-us-east-2a          Ready   0           1      1    1       1    0
nodes-us-east-2b          Ready   0           1      1    1       1    0
nodes-us-east-2c          Ready   0           1      1    1       1    0

No rolling-update required.
```
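Without working SSH, the worker boot logs can at least be pulled from the EC2 console output (a rough sketch; the instance ID is a placeholder for one of the worker instances):

```bash
# Dump the most recent console output of a non-joining worker;
# cloud-init / nodeup errors usually show up here
aws ec2 get-console-output \
  --instance-id i-0123456789abcdef0 \
  --latest \
  --output text
```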
And `kubectl get nodes`:
```
i-039068b02073c449b   Ready   control-plane   9m15s   v1.30.2
i-0c5a8f54983375d89   Ready   control-plane   9m29s   v1.30.2
i-0d959d418132a55fa   Ready   control-plane   9m22s   v1.30.2
```
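`kops validate cluster` gives a similar per-instance-group view of which machines have not joined (a sketch, assuming `KOPS_STATE_STORE` and `$NAME` are already set as above):

```bash
# Wait up to 10 minutes for the cluster to validate, reporting any
# instance groups whose machines have not joined
kops validate cluster --name $NAME --wait 10m
```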
And `kops version`:
```
Client version: 1.30.1
```