eks-addons-vpc-cni

[screenshot: github releases page, minor versions]

latest version
20230731 update: v1.13.3-eksbuild.1

Upgrading (or downgrading) the VPC CNI version should result in no downtime. Existing pods are not affected and do not lose network connectivity. New pods stay in the Pending state until the VPC CNI is fully initialized and can assign pod IP addresses. In v1.12.0+, VPC CNI state is restored from an on-disk file, /var/run/aws-node/ipam.json; in earlier versions, state is restored via calls to the container runtime.
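
During an upgrade, pods held up by CNI initialization show up as Pending; a quick way to watch for them (a minimal check, nothing cluster-specific assumed):

    kubectl get pods -A --field-selector=status.phase=Pending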

Updating add-on

from web UI (preferred)

works: choose PRESERVE as the conflict-resolution method so any existing custom VPC CNI settings are kept

from CLI

not successful; the steps attempted:

  • check addon version

    CLUSTER_NAME=ekscluster1
    export AWS_DEFAULT_REGION=us-east-2
    aws eks describe-addon --cluster-name ${CLUSTER_NAME} \
    --addon-name vpc-cni --query addon.addonVersion --output text
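
    To also list the versions available for a given Kubernetes release (a side check; "1.27" below is a stand-in for your cluster's Kubernetes version):

    aws eks describe-addon-versions --addon-name vpc-cni \
    --kubernetes-version 1.27 \
    --query 'addons[].addonVersions[].addonVersion' --output text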
    

  • backup

    kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml
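
    If the upgrade misbehaves, the saved manifest can be re-applied as a rollback (a sketch; this restores the daemonset spec only, not the managed add-on object):

    kubectl apply -f aws-k8s-cni-old.yaml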
    

  • upgrade

    SOURCE_VERSION=v1.13.2-eksbuild.1   # current version, noted for reference (not used below)
    TARGET_VERSION=v1.13.3-eksbuild.1
    aws eks update-addon --cluster-name ${CLUSTER_NAME} \
    --addon-name vpc-cni --addon-version ${TARGET_VERSION} \
    --resolve-conflicts PRESERVE
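
  • verify (optional)

    A quick check, reusing CLUSTER_NAME from above: the add-on should report ACTIVE and the aws-node daemonset should finish rolling out.

    aws eks describe-addon --cluster-name ${CLUSTER_NAME} \
    --addon-name vpc-cni --query addon.status --output text
    kubectl rollout status daemonset aws-node -n kube-system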
    

re-install

# remove the existing add-on first:
# aws eks delete-addon --cluster-name ekscluster1 --addon-name vpc-cni

# IRSA role passed to the add-on's service account:
# arn:aws:iam::xxx:role/eksctl-ekscluster1-addon-vpc-cni-Role1
# optional env overrides can be added, e.g.:
# --configuration-values '{"env":{"ENABLE_IPv4":"true","ENABLE_IPv6":"false"}}'

aws eks create-addon --cluster-name ${CLUSTER_NAME} \
--addon-name vpc-cni --addon-version v1.13.2-eksbuild.1 \
--service-account-role-arn arn:aws:iam::xxx:role/eksctl-ekscluster1-addon-vpc-cni-Role1 \
--resolve-conflicts OVERWRITE
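
To confirm the re-created add-on took effect (a sketch; the IRSA annotation and the running CNI image tag are the two things to check):

# the role ARN should show up as the eks.amazonaws.com/role-arn annotation
kubectl describe serviceaccount aws-node -n kube-system

# the image tag should match the requested add-on version
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'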

additional