Check if all containers are running, ready and healthy
Knative Serving Components
$ kubectl get pods -n knative-serving
NAME READY STATUS RESTARTS AGE
activator-6b9dc4c9db-cl56b 1/1 Running 0 2m
autoscaler-77f9b75856-f88qw 1/1 Running 0 2m
controller-7dcb56fdb6-dbzrp 1/1 Running 0 2m
domain-mapping-6bb8f95654-c575d 1/1 Running 0 2m
domainmapping-webhook-c77dcfcfb-hg2wv 1/1 Running 0 2m
webhook-78dc6ddddb-6868n 1/1 Running 0 2m
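To automate this check, you can also wait for every pod in the namespace to become ready (a minimal sketch using standard kubectl; adjust the timeout to your environment):
$ kubectl wait pod --all --for=condition=Ready -n knative-serving --timeout=300s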
Knative Serving Networking Layer
Depending on which networking layer you installed (Istio, Kourier, or Contour), check the pods in the corresponding namespaces:
$ kubectl get pods -n knative-serving
NAME READY STATUS RESTARTS AGE
net-istio-controller-ccc455b58-f98ld 1/1 Running 0 19s
net-istio-webhook-7558dbfc64-5jmt6 1/1 Running 0 19s
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-c7b9f6477-bgr6q 1/1 Running 0 44s
istiod-79d65bf5f4-5zvtj 1/1 Running 0 29s
$ kubectl get pods -n knative-serving
NAME READY STATUS RESTARTS AGE
net-kourier-controller-5fcbb6d996-fprpd 1/1 Running 0 103s
$ kubectl get pods -n kourier-system
NAME READY STATUS RESTARTS AGE
3scale-kourier-gateway-86b9f6dc44-xpn6h 1/1 Running 0 2m22s
$ kubectl get pods -n contour-external
NAME READY STATUS RESTARTS AGE
contour-7b995cdb68-jg5s8 1/1 Running 0 41s
contour-certgen-v1.24.2-zmr9r 0/1 Completed 0 41s
envoy-xkzck 2/2 Running 0 41s
$ kubectl get pods -n contour-internal
NAME READY STATUS RESTARTS AGE
contour-57fcf576fd-wb57c 1/1 Running 0 55s
contour-certgen-v1.24.2-gqgrx 0/1 Completed 0 55s
envoy-rht69 2/2 Running 0 55s
Knative Eventing
$ kubectl get pods -n knative-eventing
NAME READY STATUS RESTARTS AGE
eventing-controller-bb8b689c4-lk6pq 1/1 Running 0 41s
eventing-webhook-577bb88ccd-hcz5p 1/1 Running 0 41s
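If the pod list is long, a quick way to surface anything that is not running is to filter on the pod phase (a sketch using a standard kubectl field selector; an empty result means all pods are in the Running phase):
$ kubectl get pods -n knative-eventing --field-selector=status.phase!=Running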
Check if there are any errors logged in the Knative components
$ kubectl logs -n knative-serving <pod-name>
$ kubectl logs -n knative-eventing <pod-name>
$ kubectl logs -n <ingress-namespace> <pod-name> # see above for the relevant namespaces
For example:
$ kubectl logs -n knative-serving activator-6b9dc4c9db-cl56b
2023/05/01 11:52:51 Registering 3 clients
2023/05/01 11:52:51 Registering 3 informer factories
2023/05/01 11:52:51 Registering 4 informers
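To scan a component's recent logs for problems without picking an individual pod, you can target the deployment directly and grep for errors (a sketch; the deployment names match the pods listed above):
$ kubectl logs -n knative-serving deployment/controller --since=1h | grep -i error
$ kubectl logs -n knative-eventing deployment/eventing-controller --since=1h | grep -i error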
Check the status of the Knative Resources
$ kubectl describe -n <namespace> kservice
$ kubectl describe -n <namespace> config
$ kubectl describe -n <namespace> revision
$ kubectl describe -n <namespace> sks # Serverless Service
$ kubectl describe -n <namespace> kingress # Knative Ingress
$ kubectl describe -n <namespace> rt # Knative Route
$ kubectl describe -n <namespace> dm # Domain-Mapping
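If you only want the readiness conditions rather than the full describe output, a jsonpath query can pull them out directly (a sketch assuming a service named hello in the default namespace, as in the example below):
$ kubectl get ksvc hello -n default -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'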
Check the Status section at the end of the output. For example:
Knative Serving
$ kubectl describe -n default kservice
... omitted ...
Status:
Address:
URL: http://hello.default.svc.cluster.local
Conditions:
Last Transition Time: 2023-05-01T12:08:18Z
Status: True
Type: ConfigurationsReady
Last Transition Time: 2023-05-01T12:08:18Z
Status: True
Type: Ready
Last Transition Time: 2023-05-01T12:08:18Z
Status: True
Type: RoutesReady
Latest Created Revision Name: hello-00001
Latest Ready Revision Name: hello-00001
Observed Generation: 1
Traffic:
Latest Revision: true
Percent: 100
Revision Name: hello-00001
URL: http://hello.default.10.89.0.200.sslip.io
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 45s service-controller Created Configuration "hello"
Normal Created 45s service-controller Created Route "hello"
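The Events section at the bottom of the describe output is often the fastest pointer to a failure. You can also list recent events for the whole namespace directly (standard kubectl, sorted oldest to newest):
$ kubectl get events -n default --sort-by=.lastTimestamp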
Knative Eventing
$ kubectl describe -n <namespace> brokers
$ kubectl describe -n <namespace> eventtypes
$ kubectl describe -n <namespace> triggers
$ kubectl describe -n <namespace> channels
$ kubectl describe -n <namespace> subscriptions
$ kubectl describe -n <namespace> apiserversources
$ kubectl describe -n <namespace> containersources
$ kubectl describe -n <namespace> pingsources
$ kubectl describe -n <namespace> sinkbindings
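For a quick readiness overview before describing individual resources, you can list several eventing resource types in one call (a sketch; this assumes the Knative CRDs are the ones resolved by these resource names in your cluster, and the READY column should show True for healthy resources):
$ kubectl get brokers,triggers,channels,subscriptions -n default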
Check the Status section at the end of the output. For example:
$ kubectl describe -n default brokers
... omitted ...
Status:
Annotations:
bootstrap.servers: my-cluster-kafka-bootstrap.kafka:9092
default.topic.partitions: 10
default.topic.replication.factor: 3
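To fetch just the broker's ingress address for smoke testing, a jsonpath query works here as well (a sketch assuming a broker named default):
$ kubectl get broker default -n default -o jsonpath='{.status.address.url}'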
KServe Debugging Guide
- https://kserve.github.io/website/developer/debug/
Debugging application issues
- https://knative.dev/docs/serving/troubleshooting/debugging-application-issues/
## How to get your minikube IP and the port that the istio-ingressgateway is listening on
$ minikube ip
$ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
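You can capture both values in shell variables to make the curl commands below easier to reuse (a minimal sketch for a minikube + Istio setup):
$ INGRESS_IP=$(minikube ip)
$ INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ echo "$INGRESS_IP:$INGRESS_PORT"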
## How to get the Host header information of your Knative Service
With the default settings, Knative will generate an accessible URL for your service, hence the example.com domain name. The naming convention can be customized to your liking, along with other settings, but that's beyond the scope of what we are trying to do here.
$ kubectl get ksvc hello --output=custom-columns=NAME:.metadata.name,URL:.status.url
## How to use curl with the Host header to test the route
The first request may take a little longer than you would expect.
$ curl -H "Host: <your URL without http://>" http://<minikube IP>:<istio-ingressgateway PORT>
or
$ curl -H "Host: hello.default.svc.cluster.local" http://10.107.112.173 -v -i
## How to check that the Knative autoscaler has access to API resources
Verify that:
- the autoscaler pod is mounting the correct token for the controller service account
- the service account has permissions to access the resources that the log file says are failing, for example:
$ kubectl -n knative-serving --as=system:serviceaccount:knative-serving:controller auth can-i get configmaps
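To see everything the controller service account is allowed to do in one shot, kubectl can list its permissions (standard kubectl auth subcommand):
$ kubectl -n knative-serving --as=system:serviceaccount:knative-serving:controller auth can-i --list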
## Useful commands for inspecting Knative Serving resources
$ kubectl get ksvc
$ kubectl get routes
$ kubectl describe ksvc hello
$ kubectl describe ksvc knative-service
$ kubectl describe revision
$ kubectl get deploy
$ kubectl describe deploy hello-00001-deployment
$ kubectl get deployment -l "serving.knative.dev/service=knative-service" --output yaml
$ kubectl describe deployment -l "serving.knative.dev/service=knative-service"
$ kubectl describe deployment -l "serving.knative.dev/service=hello"
$ kubectl get images
$ kubectl get ds
$ kubectl logs deployment/controller -n knative-serving | grep "error"
$ kubectl logs deployment/controller -n knative-serving | grep "error" | less
### Check istio-ingressgateway pod status:
$ kubectl get pods -n istio-system
### Check ksvc status:
Ensure that the Knative Services are in a READY state.
$ kubectl get ksvc
### Check istio-ingressgateway logs:
Check the logs of the istio-ingressgateway pods to see if there are any errors.
$ kubectl logs -n istio-system -l app=istio-ingressgateway
### Examine the VirtualService:
Verify that the VirtualService created for the Knative Service is correctly configured. It should point to the correct service and port.
$ kubectl get virtualservice
$ kubectl describe virtualservice <name>
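A compact way to confirm which hosts and gateways each VirtualService covers is a custom-columns listing (standard kubectl; no Knative-specific flags assumed):
$ kubectl get virtualservice -n default -o custom-columns=NAME:.metadata.name,HOSTS:.spec.hosts,GATEWAYS:.spec.gateways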
### Check Knative Serving controller logs:
The Knative Serving controller logs might have important messages about the service's state.
$ kubectl logs -n knative-serving <controller-pod-name>
Knative Serving Troubleshooting: Autoscaler Component
# Multiple autoscaler instances:
# Verify that there aren't multiple replicas of the autoscaler running unintentionally; this can happen in misconfigured deployments. If there is only supposed to be one replica but several are running, that can lead to lease conflicts.
$ kubectl get pods -n knative-serving -l app=autoscaler
# Lease object inspection:
# Check the status and metadata of the lease object.
$ kubectl get lease autoscaler-bucket-00-of-01 -n knative-serving -o yaml
$ kubectl describe leases.coordination.k8s.io autoscaler-bucket-00-of-01 -n knative-serving
# Restart the autoscaler:
# Sometimes, simply restarting the problematic component resolves the issue.
$ kubectl delete pods -n knative-serving -l app=autoscaler
# Review the autoscaler configuration:
# Look at the ConfigMap config-autoscaler in the knative-serving namespace and check for any misconfigurations.
$ kubectl describe configmap config-autoscaler -n knative-serving
# Check the autoscaler logs:
# The autoscaler component is responsible for scaling. Review its logs for any issues related to metric collection.
$ kubectl logs -n knative-serving -l app=autoscaler
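If you suspect a leadership conflict, checking who currently holds each lease can confirm it (a sketch; holderIdentity is a standard field on coordination.k8s.io Lease objects):
$ kubectl get leases -n knative-serving
$ kubectl get lease autoscaler-bucket-00-of-01 -n knative-serving -o jsonpath='{.spec.holderIdentity}{"\n"}'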
Reference
- https://knative.dev/docs/install/troubleshooting/
- https://knative.dev/docs/serving/troubleshooting/debugging-application-issues/
- https://knative.dev/docs/eventing/troubleshooting/
- https://github.com/dewitt/knative-docs/blob/master/serving/debugging-application-issues.md
- https://github.com/dewitt/knative-docs/blob/master/serving/debugging-performance-issues.md
- https://github.com/dewitt/knative-docs/blob/master/serving/accessing-logs.md
- https://github.com/dewitt/knative-docs/blob/master/serving/accessing-metrics.md
- https://github.com/dewitt/knative-docs/blob/master/serving/accessing-traces.md
- https://github.com/dewitt/knative-docs/blob/master/serving/gke-assigning-static-ip-address.md
- https://github.com/dewitt/knative-docs/blob/master/serving/installing-logging-metrics-traces.md
- https://github.com/dewitt/knative-docs/blob/master/serving/outbound-network-access.md
- https://github.com/dewitt/knative-docs/blob/master/serving/setting-up-a-logging-plugin.md
- https://github.com/dewitt/knative-docs/blob/master/serving/using-a-custom-domain.md
- https://github.com/dewitt/knative-docs/blob/master/serving/using-an-ssl-cert.md
- https://github.com/dewitt/knative-docs/blob/master/serving/using-cert-manager-on-gcp.md
- https://github.com/dewitt/knative-docs/blob/master/serving/using-external-dns.md
- https://github.com/knative/serving/issues
- https://github.com/knative/eventing/issues