Easier Kubernetes debugging with Okteto
We recently published a survey to help us better understand how developers use Kubernetes in their day-to-day workflows. One of the questions asked what developers struggle with the most when developing in Kubernetes. Not surprisingly, the top answer so far has been "Finding the right logs if my application fails to run."
Okteto makes Kubernetes development simpler. Based on your feedback, we updated Okteto Cloud to display your application's state and error conditions directly in the UI. No more scratching your head trying to figure out what’s going on with your application.
In this blog post, we'll explain why developing Kubernetes manifests can be hard and how Okteto's error reporting features help you be more productive when developing Kubernetes applications.
The Problem
Kubernetes is an open-source project for automating the deployment, scaling, and management of containerized applications. It is designed for DevOps engineers looking for an infrastructure automation solution.
Kubernetes is not designed for developers, and debugging deployment issues can be pretty intimidating. The official Kubernetes documentation spends a fair amount of time on this topic:
- Troubleshoot Applications
- Application Introspection and Debugging
- Debug Init Containers
- Determine the Reason for Pod Failure
- Debug Pods and Replication Controllers
You need to run a lot of kubectl commands each time your app misbehaves to truly understand what's going on. And this only gets harder the more microservices you have. It introduces tremendous friction for even the most basic errors, such as the following (a typical triage session is sketched right after the list):
- Typing the wrong Docker image name
- Errors in the container's start command
- Referencing the wrong secret, config map, or volume
- Exhausting your namespace quotas
- Running into pod security policy restrictions
- Missing image pull secrets
- Label selector mismatches
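To make that concrete, here is what a typical triage session looks like with plain kubectl (the pod name is a placeholder; yours will differ):

# See which pods are failing and why (check the STATUS column)
$ kubectl get pods

# Inspect a failing pod's events: image pulls, mounts, quotas...
$ kubectl describe pod <pod-name>

# Read the logs, including the previous crashed container's
$ kubectl logs <pod-name> --previous

# Scan namespace-wide events, sorted by time
$ kubectl get events --sort-by=.metadata.creationTimestamp

Multiply that by every microservice in your namespace, and the friction adds up quickly.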
Example
Let me illustrate this point with a simple example. Create a file named deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: okteto/hello-world:golang
        name: hello-world
and deploy it to Kubernetes:
$ kubectl apply -f deployment.yaml
deployment.apps/hello-world created
After a few seconds, your application will be running. You can validate it by executing:
$ kubectl get deployment.apps/hello-world -oyaml
...
conditions:
...
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
...
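If you'd rather not read through the raw YAML, kubectl can also wait for the rollout to finish and report the result in one line:

$ kubectl rollout status deployment/hello-world
deployment "hello-world" successfully rolled out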
This is the cool part of developing in Kubernetes. Type one command and your application just runs 😎.
Now imagine that you need to iterate on your Kubernetes manifest and you introduce a bug. Edit the deployment.yaml file and update the image field with an image tag that doesn't exist:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: okteto/hello-world:wrong
        name: hello-world
Deploy the Kubernetes application again by executing:
$ kubectl apply -f deployment.yaml
deployment.apps/hello-world configured
There is no error shown. Surprised? Kubernetes is a declarative orchestrator: it stores your deployment as the desired state of your application and keeps retrying to reach that state indefinitely. If the okteto/hello-world:wrong image eventually gets created, the deployment will progress and become available. Kubernetes is designed for distributed production environments, and being declarative is very useful when dealing with transient errors.
But I am developing, not running in production. This is a legitimate error, not something transient caused by bad timing or networking issues: okteto/hello-world:wrong is never going to be created. It would be far easier if I got a clear error telling me that I simply typed the wrong image.
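To be fair, the information does exist; you just have to dig for it, pod by pod. The failed pull shows up in the pod's event stream (the pod name below is a placeholder):

# Find the pod stuck in ImagePullBackOff
$ kubectl get pods

# The Events section at the end of the output lists the failed
# image pull and the reason it keeps backing off
$ kubectl describe pod <pod-name>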
Things get even more confusing. Check the state of your deployment:
$ kubectl get deployment.apps/hello-world -oyaml
...
conditions:
...
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
...
Your deployment is still available. This is due to Kubernetes' rolling update capabilities: pods created by your first deployment definition are not killed until the new pods are running and available. Rolling updates make a lot of sense in production environments. You can validate this point by executing:
$ kubectl get pods
NAME                           READY   STATUS             RESTARTS   AGE
hello-world-55d6c7dbfb-nj2sn   0/1     ImagePullBackOff   0          26s
hello-world-79bb6b95c9-ff6ft   1/1     Running            0          18m
hello-world-79bb6b95c9-c19d2   1/1     Running            0          18m
But rolling updates are not that useful when I'm developing. If I'm not careful, I'll think that my deployment definition is fine and commit my changes. After all, my application is still available. I can even query its logs and run requests against it!
$ kubectl logs -f deployment.apps/hello-world
Starting hello-world server...
Received request
Received request
...
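Plain Kubernetes can be configured to flag a stuck rollout, but it takes extra, per-deployment setup. Here is a minimal sketch, assuming a 60-second deadline is acceptable for your builds:

# Ask Kubernetes to fail rollouts that make no progress for 60s
$ kubectl patch deployment hello-world \
    -p '{"spec":{"progressDeadlineSeconds":60}}'

# Once the deadline passes, the Progressing condition turns False
# with reason ProgressDeadlineExceeded, and this command errors out
$ kubectl rollout status deployment/hello-world

Even then, nothing surfaces the failed condition proactively; you still have to remember to ask for it.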
The Solution
Thanks to your input in our Kubernetes development survey, we are introducing states and error conditions as first-class citizens in our product. If there is something wrong with your Kubernetes application, the Okteto UI will show you a clear error and recommendations on how to fix it.
Stop running dozens of kubectl commands and check your application state in a single place:
Head over to Okteto Cloud and give it a go. Okteto Cloud is a development platform for Kubernetes applications. Sign up today to get a free developer account with 4 CPUs and 8GB of RAM.