The error "the server has asked for the client to provide credentials" means your federated IAM role is not properly authenticating with EKS. Since you have added IAM access via the Access tab in EKS, let's go through the updated debugging steps for AWS 2025.
🚀 Step 1: Confirm Your IAM Role is in EKS Access Entries
- Go to AWS Console → EKS → Your Cluster (`evp-dev-eks`)
- Click the Access tab.
- Check IAM access entries:
- Your federated IAM role ARN should be listed.
- It should be associated with `AmazonEKSClusterAdminPolicy` (cluster admin) or another access policy appropriate for your use. (Access entries use EKS access policies; they replace the legacy `system:masters` mapping in `aws-auth`.)
If your role is missing, add it as described in Step 2 of my previous response.
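You can also verify this from the CLI (a quick check, assuming AWS CLI v2 with access-entry support; cluster name, region, and account ID taken from your setup):
```bash
# List all IAM principals that have access entries on the cluster
aws eks list-access-entries --cluster-name evp-dev-eks --region ap-northeast-1

# Show which EKS access policies are associated with your role
aws eks list-associated-access-policies \
  --cluster-name evp-dev-eks --region ap-northeast-1 \
  --principal-arn arn:aws:iam::329599659699:role/<Federated-Role-Name>
```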
🚀 Step 2: Ensure IAM Role Has Required EKS Permissions
Even if the role is listed in the access entries, it also needs the right IAM permissions on the AWS side.
✅ Verify the IAM role's permissions
Run:
```bash
aws iam list-attached-role-policies --role-name <Federated-Role-Name>
```
For the calling (federated) role, the permissions that actually matter are:
- ✅ `eks:DescribeCluster` (used by `aws eks update-kubeconfig`)
- ✅ `eks:AccessKubernetesApi` (used when authenticating through access entries)
Managed policies such as `AmazonEKSClusterPolicy`, `AmazonEKSWorkerNodePolicy`, and `AmazonEKSServicePolicy` are intended for the cluster and worker-node IAM roles, not for the calling principal, so attaching them here won't fix this error.
✅ If the permissions above are missing, attach a policy that grants them (see the sketch below).
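A minimal inline-policy sketch granting just those two actions (the policy name `eks-client-access` is my placeholder; if the federated role is managed by your identity team, you may need to request this change instead):
```bash
aws iam put-role-policy \
  --role-name <Federated-Role-Name> \
  --policy-name eks-client-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["eks:DescribeCluster", "eks:AccessKubernetesApi"],
      "Resource": "*"
    }]
  }'
```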
🚀 Step 3: Regenerate kubeconfig with the Correct IAM Role
Since federated users authenticate differently, explicitly specify the IAM role when regenerating `kubeconfig`:
```bash
aws eks update-kubeconfig --name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name>
```
Then test:
```bash
kubectl get nodes
```
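To confirm the role ARN actually made it into your kubeconfig, you can inspect the exec credential plugin's arguments (a quick check; the jsonpath assumes the minified view has a single user entry using an exec plugin):
```bash
# The printed args should include the role ARN you passed to update-kubeconfig
kubectl config view --minify -o jsonpath='{.users[0].user.exec.args}'
```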
🚀 Step 4: Manually Test Authentication Token
If `kubectl` still fails, test authentication without `kubeconfig` by manually retrieving a token:
```bash
aws eks get-token --cluster-name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name> | jq -r '.status.token'
```
- If this fails, the problem is upstream of EKS: `aws eks get-token` builds the token locally from a presigned STS call, so a failure usually means your session can't assume the role (check `sts:AssumeRole` permissions or expired federation credentials), not a missing `eks:DescribeCluster` permission (see the `describe-cluster` check after this list).
- If it succeeds, try running `kubectl` with the token directly:
```bash
kubectl get nodes --token "$(aws eks get-token --cluster-name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name> | jq -r '.status.token')"
```
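`eks:DescribeCluster` still matters for `update-kubeconfig`, so it's worth checking it directly; a quick sanity check:
```bash
# Expect ACTIVE; an AccessDeniedException means the role lacks eks:DescribeCluster
aws eks describe-cluster --name evp-dev-eks --region ap-northeast-1 \
  --query 'cluster.status' --output text
```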
🚀 Step 5: Verify Kubernetes Cluster API Connectivity
Check if your cluster is reachable:
```bash
kubectl cluster-info
```
Manually test access to the API server:
```bash
curl -k $(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
```
- If you get 401/403 → the endpoint is reachable and the problem is authentication/authorization (IAM role or access entry). Note that an unauthenticated curl will normally get 401/403 anyway, so this mainly proves connectivity.
- If the connection times out or is refused → a network issue (private-only API endpoint, security group, or VPN), rather than the EKS API being down.
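To tell a rejected token apart from a valid identity that simply lacks Kubernetes permissions, call the API server with the bearer token yourself (a minimal sketch; `-k` skips TLS verification for brevity):
```bash
API=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(aws eks get-token --cluster-name evp-dev-eks --region ap-northeast-1 \
  --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name> | jq -r '.status.token')
# 401 = the token itself was rejected; 403 = authenticated but not authorized (access policy/RBAC)
curl -sk -o /dev/null -w '%{http_code}\n' -H "Authorization: Bearer $TOKEN" "$API/api/v1/nodes"
```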
🚀 Step 6: Clear Cached Credentials & Restart Terminal
If authentication is still failing, remove any stale cached credentials and confirm which identity your CLI is actually using:
```bash
rm -rf ~/.kube/cache
rm -rf ~/.kube/http-cache
aws sts get-caller-identity
```
Then restart your terminal and re-run:
```bash
aws eks update-kubeconfig --name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name>
kubectl get nodes
```
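If your kubectl is v1.27 or newer, `kubectl auth whoami` shows the identity the API server actually sees, which makes role mismatches obvious:
```bash
# Prints the username and groups the cluster resolved from your token
kubectl auth whoami
```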
🎯 Final Debugging
If none of the above steps work, run this verbose command and share the output (the interesting parts are the HTTP status codes and any messages from the exec credential plugin):
```bash
kubectl get nodes --v=9
```
🚀 Let me know if this helps!