Kubernetes: kubectl "Unhandled Error" err="couldn't get current server API group list"

$ kubectl get nodes

E0226 13:37:41.813073 59589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0226 13:37:42.392980 59589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0226 13:37:42.960692 59589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0226 13:37:43.531081 59589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E0226 13:37:44.099237 59589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Yes, you can increase the debug verbosity of kubectl using the --v flag.

How to Run kubectl get nodes with Debug Logging

kubectl get nodes --v=10

This will output detailed logs, including:

  • Requests sent to the Kubernetes API server
  • Authentication failures
  • Any authorization issues
  • Networking problems

Other Debug Levels

You can set different verbosity levels (--v=0 to --v=10):

  • --v=4: Basic debugging
  • --v=6: API request details
  • --v=8: HTTP request & response data
  • --v=10: Maximum debug logs (full trace)
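
Verbose kubectl logs are written to stderr, so at the higher levels it can help to capture them to a file and read them afterwards (assuming a POSIX shell):

kubectl get nodes --v=8 2> kubectl-debug.log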

Alternative Commands for Debugging

Check API Server Connectivity

kubectl cluster-info --v=10

Check Authorization Issues

kubectl auth can-i list nodes --all-namespaces --v=10

List Kubernetes Contexts with Debug

kubectl config get-contexts --v=10

How to Analyze the Output

  • Look for authentication errors (e.g., 401 Unauthorized, 403 Forbidden)
  • Check if the request reaches the correct API server
  • Verify if the IAM role is being used correctly
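
To surface just the authentication-related lines from a verbose run, a simple filter works well (a sketch assuming a POSIX shell with grep available):

kubectl get nodes --v=8 2>&1 | grep -iE "unauthorized|forbidden|401|403"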

Once you run kubectl get nodes --v=10, let me know the errors you see! 🚀

The error "the server has asked for the client to provide credentials" means your federated IAM role is not properly authenticating with EKS. Since you have added IAM access via the Access tab in EKS, let's go through the updated debugging steps for AWS 2025.
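
Before going through the steps, confirm which identity AWS actually sees for your terminal session; the Arn in the output should be your federated role (or a session based on it):

aws sts get-caller-identity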


🚀 Step 1: Confirm Your IAM Role is in EKS Access Entries

  1. Go to AWS Console → EKS → Your Cluster (evp-dev-eks)
  2. Click the Access tab.
  3. Check IAM access entries:
    • Your Federated IAM Role ARN should be listed.
    • It should be associated with an administrator access policy (e.g., AmazonEKSClusterAdminPolicy) or another Kubernetes access policy appropriate for your role.

If your role is missing, add it as described in Step 2 of my previous response.
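
If you prefer the CLI over the console, you can do the same check with the EKS access-entry APIs (a quick sketch, using the cluster name and role ARN from this thread):

aws eks list-access-entries --cluster-name evp-dev-eks --region ap-northeast-1
aws eks list-associated-access-policies --cluster-name evp-dev-eks --region ap-northeast-1 --principal-arn arn:aws:iam::329599659699:role/<Federated-Role-Name>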


🚀 Step 2: Ensure IAM Role Has Required EKS Permissions

Even if the role is added, it must have the right AWS permissions.

✅ Verify the IAM Role has these policies

Run:

aws iam list-attached-role-policies --role-name <Federated-Role-Name>

It must include:

  • ✅ AmazonEKSClusterPolicy
  • ✅ AmazonEKSWorkerNodePolicy
  • ✅ AmazonEKSServicePolicy
  • ✅ AmazonEKSAccessKubernetesApi

✅ If any policy is missing, attach it:

aws iam attach-role-policy --role-name <Federated-Role-Name> --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

(Repeat for each missing policy.)
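
If several are missing, a small shell loop saves the repetition (a sketch; substitute your actual role name):

for policy in AmazonEKSClusterPolicy AmazonEKSWorkerNodePolicy AmazonEKSServicePolicy AmazonEKSAccessKubernetesApi; do
  aws iam attach-role-policy --role-name <Federated-Role-Name> --policy-arn "arn:aws:iam::aws:policy/$policy"
done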


🚀 Step 3: Regenerate kubeconfig with the Correct IAM Role

Since federated users authenticate differently, explicitly specify the IAM role when generating kubeconfig.

aws eks update-kubeconfig --name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name>

Then test:

kubectl get nodes
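
To confirm the role ARN actually made it into the generated kubeconfig, inspect the exec section of the current context; the printed args should include --role-arn followed by your role (a quick check, assuming the context created above is active):

kubectl config view --minify -o jsonpath='{.users[0].user.exec.args}'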

🚀 Step 4: Manually Test Authentication Token

If kubectl still fails, test authentication without kubeconfig by manually retrieving a token:

aws eks get-token --cluster-name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name> | jq -r '.status.token'

  • If this fails, the problem is on the AWS side: your credentials may have expired, or you are not allowed to assume the role (sts:AssumeRole denied).
  • If this succeeds, try running kubectl using the token:

kubectl get nodes --token $(aws eks get-token --cluster-name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name> | jq -r '.status.token')

🚀 Step 5: Verify Kubernetes Cluster API Connectivity

Check if your cluster is reachable:

kubectl cluster-info

Manually test access to the API server:

curl -k $(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

  • 401/403 → the endpoint is reachable; since this curl sends no token, that response is expected, and the remaining problem is authentication/IAM.
  • Connection refused or timeout → the endpoint is unreachable (for example a private-only cluster endpoint, a VPN/network issue, or, rarely, the EKS API being down).
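
If the endpoint turns out to be unreachable, check whether the cluster exposes a public endpoint at all (a sketch using the cluster name from this thread); false means you must reach it from inside the VPC or over a VPN:

aws eks describe-cluster --name evp-dev-eks --region ap-northeast-1 --query 'cluster.resourcesVpcConfig.endpointPublicAccess'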

🚀 Step 6: Clear Cached Credentials & Restart Terminal

If authentication is still failing, remove any old credentials:

rm -rf ~/.kube/cache
rm -rf ~/.kube/http-cache
aws sts get-caller-identity

Then restart your terminal and re-run:

aws eks update-kubeconfig --name evp-dev-eks --region ap-northeast-1 --role-arn arn:aws:iam::329599659699:role/<Federated-Role-Name>
kubectl get nodes

🎯 Final Debugging

If none of the above steps work, run this debug command and share the output:

kubectl get nodes --v=9

🚀 Let me know if this helps!