Okteto Blog

Kubernetes for Developers

An Early Look At Helm 3

The first beta of Helm 3 is now available! This is a particularly important milestone, because it signals the finalization of the big Helm rewrite. From now on, the Helm team’s focus will be on bug fixes and stability, which means we can start building charts that target Helm 3, right?

I really wanted to take Helm 3 for a spin, but didn’t want to mess up my local machine (the Helm 2 and Helm 3 binaries are not compatible, so you need to keep separate installs, $HELM_HOME directories, and whatnot). So instead of testing it on my machine, I decided to launch a development environment in Okteto Cloud and test everything from there.

Launch your development environment in Okteto Cloud

If this is your first time using Okteto, start by installing the Okteto CLI. We’ll need it to launch the development environment into Okteto Cloud.

We’ll start by initializing our development environment, using the okteto init command to create our manifest (this tells Okteto what kind of development environment to launch). Since we are working on an empty directory, okteto will ask us to pick a runtime. Pick the first one.

$ mkdir helm3
$ cd helm3
$ okteto init

Now that we have our development environment defined, we need to configure our local environment to work with Okteto Cloud.

First, run the okteto login command to create your Okteto Cloud account and link it to your local computer (you only need to do this once per computer).

$ okteto login 
✓ Logged in as rberrelleza
Run `okteto namespace` to download your Kubernetes credentials.

Second, run the okteto namespace command to download the Kubernetes credentials for Okteto Cloud and set them as our current context.

$ okteto namespace 
✓ Updated context 'cloud_okteto_com' in '/Users/ramiro/.kube/config'

Now we are ready to go! Run the okteto up command to launch our development environment directly into Okteto Cloud:

$ okteto up

Deployment okteto-helm3 doesn't exist in namespace rberrelleza. Do you want to create a new one? [y/n]: y
✓ Persistent volume provisioned
✓ Files synchronized
✓ Okteto Environment activated
Namespace: rberrelleza
Name: okteto-helm3

Welcome to your development environment. Happy coding!
okteto>

The okteto up command launches a development environment in Okteto Cloud, keeps your code synchronized between your development environment and your local machine, and automatically opens a shell into the development environment for you. From now on we’ll be running all the commands directly in our remote development environment (note the okteto> bash prompt in the code samples 😎).

Install Helm 3 in the development environment

Download the v3.0.0-beta.1 release from GitHub, and install it in /usr/local/bin.

okteto> wget https://get.helm.sh/helm-v3.0.0-beta.1-linux-amd64.tar.gz -O /tmp/helm-v3.0.0-beta.1-linux-amd64.tar.gz
okteto> tar -xvzf /tmp/helm-v3.0.0-beta.1-linux-amd64.tar.gz -C /tmp
okteto> mv /tmp/linux-amd64/helm /usr/local/bin/helm
okteto> chmod +x /usr/local/bin/helm

Run helm version to make sure everything is OK (we are dealing with beta software, after all).

okteto> helm version
version.BuildInfo{Version:"v3.0.0-beta.1", GitCommit:"f76b5f21adb53a85de8925f4a9d4f9bd99f185b5", GitTreeState:"clean", GoVersion:"go1.12.9"}

High-level review

The biggest change introduced by Helm 3 (in my opinion) is that Tiller is gone. This makes Helm a lot simpler to use, since the Helm commands now run under the credentials you’re using instead of those of an intermediate service. This has huge implications, especially if you’re working on a shared or multi-tenant cluster. This is particularly exciting for us, since it allows us to fully support Helm as a first-class citizen in Okteto Cloud.

Helm 3 will also bring a more powerful chart model that’s going to leverage Open Container Initiative (OCI) images and registries for distribution. I imagine it’s going to be something similar (or complementary?) to the work done in the CNAB project. Sadly, installing third-party charts is broken in this beta, so I didn’t get a chance to try that out.

What I did get a chance to try was the full lifecycle of a local chart. As you might expect, it’s pretty much the same as in Helm 2, but without the Tiller-related complexities.

Deploying our first chart

To keep exploring Helm 3, we are going to create a simple chart by running helm create. This command creates a chart with a deployment of an NGINX container and its corresponding service.

okteto> helm create hello-world
Creating hello-world
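For reference, the generated chart scaffold looks roughly like this (reconstructed from memory of the default helm create output; minor details vary between Helm versions):

```text
hello-world/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── ingress.yaml
    ├── service.yaml
    └── NOTES.txt
```

The deployment and service templates are what get rendered and applied when we install the chart below.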

Deploy your chart by running the helm install command:

okteto> helm install hello-world ./hello-world
NAME: hello-world
LAST DEPLOYED: 2019-08-29 23:01:17.851466604 +0000 UTC m=+0.128796294
NAMESPACE: rberrelleza
STATUS: deployed
....

You can then run helm list to see all the installed releases:

okteto> helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART
hello-world rberrelleza 1 2019-08-29 23:06:40.982957007 +0000 UTC deployed hello-world-0.1.0

When you launch a development environment in Okteto Cloud, a set of credentials is automatically created for you. The credentials are automatically mounted on your development environment, so you can start using tools like helm or kubectl without requiring extra configuration.

Another big change introduced with Helm 3 is that every release now has its own secret, clearly tagged with the name of the release and the version, and stored in the same namespace (no more searching across namespaces to find out why a certain value was applied!). This makes rolling back versions easier, since Helm only needs to apply the contents of the secret, instead of having Tiller do complicated calculations.

You can see the content by running the kubectl get secret command:

okteto> kubectl get secret hello-world.v1 -oyaml

I couldn’t figure out how to decode/decrypt the content of the secret, I’ll update blog post once I do.

Now that our application is installed and ready, let’s try it out. Instead of having to run a port-forward to our local machine, we will let Okteto Cloud automatically create a publicly accessible SSL endpoint for our application by annotating the service with dev.okteto.com/auto-ingress=true. The chart created by helm create doesn’t support annotations, so we’ll just use kubectl annotate directly:

okteto> kubectl annotate service hello-world dev.okteto.com/auto-ingress=true
service/hello-world annotated

Let’s open our browser and head out to Okteto Cloud to see the application’s endpoint.

Go ahead and click on the URL to see our application up and running.

Upgrading the chart

Let’s change the boring “Welcome to nginx page!” with something with more flair. We’ll upgrade our chart and change the container image from nginx to ramiro/hello using the helm upgrade command.

okteto> helm upgrade --set image.repository=ramiro/hello hello-world ./hello-world

Run helm list to see the state of the release. Notice how the value of revision changed from 1 to 2 to indicate that a new version was deployed.

okteto> helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART
hello-world rberrelleza 2 2019-08-30 00:05:57.204015932 +0000 UTC deployed hello-world-0.1.0

If you run the kubectl get secret command you’ll also see that a new secret was created for our new release, along with the older one (for rollback purposes).

okteto> kubectl get secret
NAME TYPE DATA AGE
...
hello-world.v1 helm.sh/release 1 10m
hello-world.v2 helm.sh/release 1 4m22s
...

Go back to your browser, reload the page, and verify that it was correctly upgraded 🐶.

Rolling back the chart

Now let’s simulate that we have a bad build. We’ll rollback to the previous stable version by using the helm rollback command.

okteto> helm rollback hello-world 1
Rollback was a success! Happy Helming!
okteto> helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART
hello-world website-rberrelleza 3 2019-09-04 01:50:17.464392339 +0000 UTC deployed hello-world-0.1.0

If we go back to our browser and reload the page, we should see NGINX’s welcome page once again. You can roll back to any specific revision, as long as its secret is still available.

Cleanup

Use the helm uninstall command to uninstall the chart and remove all the related resources.

okteto> helm uninstall hello-world
release "hello-world" uninstalled

Once you’re done playing with your development environment, exit the terminal and run the okteto down command to shut it down. But don’t worry, all the files you created there (like the chart) were automatically synchronized back to your local machine.

Conclusion

I’m really excited about Helm 3. The team managed to keep all the good things about it (repeatable installations, manifest-driven approach, easy to share charts, same commands) while removing the need to have a central service to keep all the state (buh bye Tiller!). I’m particularly curious to try the new chart model, as well as the helm test command (In my experience testing charts on Helm 2 is pretty much impossible). Beta v2 is already in prerelease, so we should have more information on all of this pretty soon!

Ephemeral development environments are a great way to keep different tech stacks from messing with each other, or to quickly try out beta software without “polluting” our machine. Making them super easy to use for everyone is one of our main motivations with building Okteto.

I would ❤️ to hear what you think about this feature.

How to develop a serverless app with OpenFaaS and Okteto

OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes.

OpenFaaS simplifies your application by helping you package your application logic in discrete packages that react to web events. Instead of having to deploy tens of pods to keep your application running at scale, OpenFaaS scales your functions automatically and independently based on web events and metrics.

In this blog post we’ll show you how to deploy your own instance of OpenFaaS, deploy your first function, and develop it. Then we’ll show you how you can use the Okteto CLI to accelerate your serverless development even more.

Deploy OpenFaaS in Okteto Cloud

If you already have your own installation of OpenFaaS, feel free to skip to the next step.

For this post, we’ll deploy OpenFaaS into Okteto Cloud. Okteto Cloud is a self-service, multi-tenant Kubernetes cluster optimized for team collaboration and Cloud Native development . You’ll need to install the Okteto CLI to be able to interact with Okteto Cloud. Make sure that you install version 1.2.3 or newer.

$ okteto version
okteto version 1.2.3

The Okteto CLI is an open source project that enables you to develop directly in your Kubernetes cluster. It’s also used to interact with Okteto Cloud.

Once the CLI is installed, run the okteto login command. The command is used to link your instance of the Okteto CLI with your Okteto Cloud account. If this is the first time you log in, an account will be created for you.

$ okteto login
Authentication will continue in your default browser
✓ Logged in as rberrelleza
Run `okteto namespace` to activate your Kubernetes configuration.

When you create your account, a namespace is automatically created for you. Run the okteto namespace command to activate it.

$ okteto namespace
✓ Updated context 'cloud_okteto_com' in '/Users/ramiro/.kube/config'

The okteto namespace command downloads your Kubernetes credentials from Okteto Cloud, adds them to your kubeconfig file, and sets it as the current context. Once you do this, you will have full access to your Kubernetes namespace with any Kubernetes tool.

The OpenFaaS gateway is configured with basic authentication. Run the command below to generate a random password and create the Kubernetes secret that the gateway needs. We’ll also save it to a local file for future access.

# generate a random password
$ PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)

$ kubectl create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password="$PASSWORD"

# We'll need this later to log in
$ echo $PASSWORD > password.txt

Now deploy OpenFaaS by running the command below.

$ kubectl apply -f https://raw.githubusercontent.com/okteto/samples/master/openfaas/openfaas.yml

configmap/alertmanager-config created
configmap/prometheus-config created
service/alertmanager created
service/basic-auth-plugin created
service/gateway created
service/nats created
service/prometheus created
deployment.apps/alertmanager created
deployment.apps/basic-auth-plugin created
deployment.apps/faas-idler created
deployment.apps/gateway created
deployment.apps/nats created
deployment.apps/prometheus created
deployment.apps/queue-worker created

For this post, we’ll be using a “developer” installation of OpenFaaS that we wrote. For production scenarios, we recommend that you follow the official documentation.

OpenFaaS includes a web gateway as part of the deployment, which can be used to see your functions, create new ones, and to invoke them. Okteto Cloud automatically created an ingress for your OpenFaaS gateway. Open Okteto Cloud in your browser and click on the gateway URL to access it.


When opening the gateway the first time you will be prompted for credentials. The user is admin and the password will be the content of the password.txt file we created before.

Install the OpenFaaS CLI

We need the OpenFaaS CLI available locally to deploy a function. If you are on macOS or Linux, run the command below to install it:

$ curl -sL cli.openfaas.com | sudo sh

On Windows, download the latest faas-cli.exe from the releases page and place it somewhere in your $PATH.

Validate that it was installed correctly by opening a terminal and running:

$ faas-cli help
$ faas-cli version

To access the OpenFaaS gateway from the CLI, you need to export the $OPENFAAS_URL environment variable (you can get your gateway’s URL from the Okteto Cloud UI) and log in using the faas-cli login command:

$ export OPENFAAS_URL=$OPENFAAS_GATEWAY_URL
$ cat password.txt | faas-cli login -u admin --password-stdin

Calling the OpenFaaS server to validate the credentials...
credentials saved for admin https://gateway-rberrelleza.cloud.okteto.net

Deploy your first function

Now that we have our instance of OpenFaaS, it’s time to deploy our first function.

OpenFaaS supports pretty much any programming language, but since I’m a huge golang fan, we’ll use that for this post. Use the faas-cli new command to create all the necessary files.

$ faas-cli new -lang go gohash
...
Function created in folder: gohash
Stack file written: gohash.yml
...

We now have a folder called gohash with a file called handler.go; this is the main code of our function. We also have a file called gohash.yml with the function’s metadata, and a folder called template with the different function templates available.

Open gohash.yml with your favorite text editor and put your Docker Hub account into the image: section, e.g. image: okteto/gohash:latest.
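For context, the stack file looks approximately like this (an illustrative sketch, not the exact file generated by this faas-cli version; the gateway URL and image are placeholders):

```yaml
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  gohash:
    lang: go
    handler: ./gohash
    image: <your-dockerhub-user>/gohash:latest
```

The image field is the only one you need to touch for this tutorial.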

Then, run the faas-cli up command to build, push and deploy your function.

The faas-cli up command uses your local Docker client to build and push the images. It requires you to be logged in to Docker Hub or a similar registry.

$ faas-cli up -f gohash.yml
[0] > Building gohash.
...
[0] < Building gohash done.
[0] worker done.

[0] > Pushing gohash [ramiro/gohash:latest].
...
[0] < Pushing gohash [ramiro/gohash:latest] done.
[0] worker done.

Deploying: gohash.

Deployed. 202 Accepted.
URL: https://gateway-openfaas-rberrelleza.cloud.okteto.net/function/gohash

Once your function has been deployed, we’ll use the gateway’s UI to invoke it. Open the gateway in your browser, click on gohash on the left, type test as the Request body, and click the Invoke button.

Develop your function

Once our function is up and running, let’s add a simple feature. Instead of just echoing back the input, we’ll calculate the checksum of the string and return that instead.

Open gohash/handler.go with your favorite editor, and update it:

package function

import (
	"crypto/sha256"
	"fmt"
)

// Handle a serverless request
func Handle(req []byte) string {
	s := sha256.Sum256(req)
	return fmt.Sprintf("Hello, Go. You said: %x", s)
}
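Before deploying, you can check what the handler should return for a given body by computing the same digest with standard tools (sha256sum on Linux; use shasum -a 256 on macOS):

```shell
# SHA-256 of the request body "test", as the updated handler computes it:
printf 'test' | sha256sum | cut -d' ' -f1
# prints 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
```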

Save the file, and build, push, and deploy your function with the faas-cli up command.

$ faas-cli up -f gohash.yml
...
...
Deployed. 202 Accepted.
URL: https://gateway-rberrelleza.cloud.okteto.net/function/gohash

Wait 30 seconds, go back to the gateway’s UI and invoke the function again to verify your code change:

You’ll need to edit the handler.go file and run faas-cli up every time you want to deploy a new change.

Develop your function, Cloud Native style

OpenFaaS offers you a great experience when building functions (way better than AWS Lambda and GCP’s Cloud Functions, in my opinion), but having to build, push, and redeploy every time you want to change a line of code adds a lot of friction to my inner loop.

What if instead of building, pushing and deploying, we take advantage of Okteto and develop our function directly in the cluster, Cloud Native style?

Okteto uses a manifest to know how to build and deploy your development environment. Create a file called gohash/okteto.yml with the following content:

name: go-dev
image: okteto/openfaas:golang
mountpath: /go/src/handler/function
command: ["bash"]
securityContext:
  fsGroup: 100
services:
  - name: gohash
    mountpath: /home/app/src
    environment:
      - fprocess=src/handler

Go to your terminal, navigate to the gohash folder and run the okteto up command to start your remote development environment:

$ cd gohash
$ okteto up

Deployment 'go-dev' doesn't exist. Do you want to create a new one? [y/n]: y
✓ Persistent volume provisioned
✓ Files synchronized
✓ Okteto Environment activated
Namespace: rberrelleza
Name: go-dev

Welcome to your development environment. Happy coding!
okteto>

Build and install the binary in your remote environment:

okteto> go install

If you want to use go build instead, call it like this: go build -o /go/src/handler/function, so that OpenFaaS picks up the changes.

Go back to the browser, and test the function again:

Now, let’s implement a second feature: We’ll include a timestamp in the response. Open gohash/handler.go.go with your favorite editor and update it:

package function

import (
	"crypto/sha256"
	"fmt"
	"time"
)

// Handle a serverless request
func Handle(req []byte) string {
	s := sha256.Sum256(req)
	now := time.Now().Format("2006-01-02T15:04:05")
	return fmt.Sprintf("[%s] Hello, Go. You said: %x", now, s)
}

Save your file, go back to your terminal, and build your function again.

okteto> go install

Go back to your browser, and test your function again:

Our change made it to the function! How did this happen? With Okteto your changes were automatically applied to the remote containers as soon as you saved them. This way you can execute native go builds, leveraging golang’s caches and incremental builds to see your changes in seconds. No commit, build, push or redeploy required 💪!

Once you’re done developing, run okteto down to return everything to the previous state.

Whoa, how does it work?

When you run okteto up, Okteto enables what we call development mode. Development Mode is what enables you to test your changes directly in the cluster, instead of having to build, push and redeploy.

After running okteto up the following events happen:

  1. Okteto creates a persistent volume on Okteto Cloud.
  2. A bi-directional synchronization service is started between your local machine and the persistent volume.
  3. Okteto launches your development environment with your persistent volume mounted, using the image defined in your okteto.yml. In this case, we are using okteto/openfaas:golang, an image that comes with all the tools you need to develop golang-based functions preinstalled.
  4. Okteto relaunches all the deployments defined in the services section of your manifest (in this case, your function). The deployment is slightly modified to have your persistent volume mounted in mountpath, and the environment variable fprocess injected.
  5. A shell is opened into your development environment.

Every time you change a file (e.g. when you add the timestamp code), the code is synchronized between your machine and your remote environment. And every time you compile your go binary, the updated version is available both in your development environment and in your function’s deployment (since they both share the same volume).

This is the ‘magic’ that allows you to validate your changes directly in the cluster, no commit, build, push or redeploy required. Join us on slack to talk more about Okteto’s features, architecture, and use cases!

Conclusions

We just built and deployed our first function in OpenFaaS in minutes.

OpenFaaS makes it simple to turn anything into a serverless function that runs on Linux or Windows through Kubernetes. Learn more about OpenFaaS here.

And then, we used Okteto to show you the advantages of developing directly in Kubernetes while keeping the same developer experience as working on a local machine. By developing directly in the cluster you not only gain speed, you also avoid the burden of having to keep Kubernetes and OpenFaaS running on your local machine.

Working on the Cloud is always better. You don’t work on your spreadsheets and listen to media files locally, do you? Stop dealing with local environments and become a Cloud Native Developer today!

Interested in boosting your team’s Kubernetes development workflows? Contact us to start running Okteto Enterprise in your own infrastructure today.

Develop a Django + Celery app in Kubernetes

Django + Celery is probably the most popular solution for developing websites that need to run tasks in the background. Developing a Django + Celery app locally is complex, as you need to run different services: Django, the Celery worker, Celery beat, Redis, databases… docker-compose is a very convenient tool in this case: you can spin up your local environment with a single command, and thanks to volume mounts you can hot reload your application in seconds.
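To make the comparison concrete, a docker-compose setup for such an app typically looks like this (a hypothetical sketch, not the sample app’s actual file; service names and commands are illustrative):

```yaml
version: "3"
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8080
    volumes:
      - .:/app            # volume mount enables hot reload
    ports:
      - "8080:8080"
  worker:
    build: .
    command: celery -A myproject worker
    volumes:
      - .:/app
  queue:
    image: redis
```

The volume mounts are what make the edit/reload loop fast, and they are exactly what is missing when you move to plain Kubernetes manifests.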

In this blog post, we want to go a step forward and explain why you should develop your Django + Celery app directly in Kubernetes. The benefits are:

  • Reduce integration issues by developing in a more production-like environment, consuming Kubernetes manifests, secrets, volumes or config maps from development.
  • Overcome local development limitations. You will be able to develop against any Kubernetes cluster, local or remote. And having a lot of microservices makes it harder and harder to run the entire development environment locally.

But it is well-known that developing in Kubernetes is tedious. Let’s explore together how to develop in Kubernetes the Cloud Native way 💥💥💥💥.

Deploy the Django + Celery Sample App

Get a local version of the Django + Celery Sample App by executing the following commands in your local terminal:

$ git clone https://github.com/okteto/samples
$ cd samples/django

The Django + Celery Sample App is a multi-service application that calculates math operations in the background. It consists of a web view, a worker, a queue, a cache, and a database.

Execute the command below to deploy the application in your Kubernetes cluster:

$ kubectl apply -f manifests
statefulset.apps "cache" created
service "cache" created
statefulset.apps "db" created
service "db" created
statefulset.apps "queue" created
service "queue" created
deployment.apps "web" created
service "web" created
deployment.apps "worker" created

Wait for a few seconds for the app to be ready. Check that all pods are ready by executing the command below:

$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cache-0 1/1 Running 0 2m
db-0 1/1 Running 0 2m
queue-0 1/1 Running 0 2m
web-7bccc4bc99-2nwtc 1/1 Running 0 2m
worker-654d7b8bd5-42rq2 1/1 Running 0 2m

Efficient Kubernetes Development with Okteto

Now that we have the Django + Celery Sample App running in a Kubernetes cluster, we can use it as our development environment. When working in Kubernetes, you would have to rebuild your docker images, push them to a registry and redeploy your application every time you want to test a code change. This cycle is complex and time-consuming. It sounds like a bad idea to kill your productivity this way.

If you are familiar with docker-compose, you know how useful it is to mount local folders into your containers to avoid the docker build/push/redeploy cycle. Volume mounts are a game changer for developing Docker applications, and they are the missing piece to speed up your Kubernetes development cycle.

We developed Okteto to solve this problem: to give you all the good parts of using docker-compose for development while moving your development into your Kubernetes cluster. Okteto is open source, and the code is available on GitHub. Feel free to check it out, contribute, and star it 🤗!

To install the Okteto CLI in your computer, follow the installation instructions and check that it is properly installed by running:

$ okteto version
okteto version 1.1.1

In order to start developing the Django + Celery Sample App, execute:

$ okteto up
✓ Files synchronized
✓ Okteto Environment activated
Namespace: pchico83
Name: web
Forward: 8080 -> 8080

curl: (52) Empty reply from server
Database is ready
No changes detected in app ‘myproject’
Created migrations
Operations to perform:
Apply all migrations: auth, contenttypes, myproject, sites
Running migrations:
No migrations to apply.
Migrated DB to latest version
Performing system checks…
System check identified no issues (0 silenced).
June 26, 2019–11:00:02
Django version 1.11.21, using settings ‘myproject.settings’
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.

Let’s have a look at the okteto.yml file to understand what the okteto up command does:

name: web
command: ["./run_web.sh"]
mountpath: /app
forward:
  - 8080:8080
services:
  - name: worker
    mountpath: /app

okteto up mounts your local folder into the remote containers of the web and worker deployments. The local folder is mounted at the remote path /app. Also, port 8080 is automatically forwarded between your development environment and your computer. If you want to know more about how Okteto works, follow this link.

Let’s Write some Code and Fix a Bug

Verify that the application is up and running by opening your browser and navigating to http://localhost:8080/jobs/. Go ahead and calculate the fibonacci for the number 5:

Press the POST button to submit the operation. The response payload will include the URL of the job. Go to http://localhost:8080/jobs/1/ and you will notice that the result is wrong (hint: the 5th Fibonacci number is not 32). This is because our worker has a bug 🙀!
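As a quick sanity check of what the worker should return, here is the expected value computed with an iterative Fibonacci in plain shell (fib is just a throwaway helper for this post):

```shell
# Iterative Fibonacci: after n loop iterations, a holds fib(n).
fib() {
  a=0; b=1
  for _ in $(seq "$1"); do
    c=$((a + b)); a=$b; b=$c
  done
  echo "$a"
}

fib 5   # prints 5 -- while 32 is 2^5, i.e. the "power" operation
```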

Typically, fixing this would involve you running the app locally, fixing the bug, building a new container, pushing it and redeploying your app. Instead, we’re going to do it the Cloud Native way .

Open myproject/myproject/models.py in your favorite local IDE. Take a look at the value of the task variable in line 29. It looks like someone hard-coded the name of the operation instead of reading it from the job. Let’s fix it by changing it to self.type, as shown below:

task = TASK_MAPPING['power']

to

task = TASK_MAPPING[self.type]

Save your file and go back to http://localhost:8080/jobs/. Submit a new fibonacci calculation for the same values as before. Go to http://localhost:8080/jobs/2/ and verify the result. The result looks correct this time, success!

How did this happen? With Okteto your changes were automatically applied to the remote containers as soon as you saved them. No commit, build, push or redeploy required 💪!

Okteto can be used in local Kubernetes installations, but why would you limit yourself to developing locally when you can develop at the speed of the cloud? Developing in the cloud has several advantages, among others:

  • The cloud offers faster hardware.
  • Your company might deploy a few services to make them available to every development environment, like a Kafka instance or a company Identity Service.
  • Consume infrastructure services like Elasticsearch or Prometheus from development to boost debugging.
  • Share your development environment endpoints with the rest of your team for fast validation.

Every developer should work in an isolated namespace, sharing the same Kubernetes cluster. Okteto Enterprise takes care of setting Roles, Network Policies, Pod Security Policies, Quotas, Limit Ranges and all the other tedious work needed to provide controlled access by several developers to the same Kubernetes cluster. If you want to give it a try, follow our getting started guide.

Cleanup

Cancel the okteto up command by pressing Ctrl + C and run the following commands to remove the resources created by this guide:

$ okteto down -v
✓ Okteto Environment deactivated

$ kubectl delete -f manifests
statefulset.apps "cache" deleted
service "cache" deleted
statefulset.apps "db" deleted
service "db" deleted
statefulset.apps "queue" deleted
service "queue" deleted
deployment.apps "web" deleted
service "web" deleted
deployment.apps "worker" deleted

Conclusions

We have shown the advantages of developing directly in Kubernetes while keeping the same developer experience as working on a local machine. Working on the Cloud is always better. You don’t work on your spreadsheets and listen to media files locally, do you? Stop dealing with local environments and become a Cloud Native Developer today!

Run Coder directly in Kubernetes

Online IDEs are becoming mainstream due to their ability to provide true one-click development environments, surpass the capabilities of developer machines and enable a new level of team collaboration. A few examples are Coder, Codeanywhere, Codenvy or AWS Cloud9.

On the other hand, Docker and Kubernetes are the de facto standard for deploying applications. Kubernetes makes it easier and faster than ever to run online IDEs in the cloud. At the same time, an online IDE running inside Kubernetes might improve the Kubernetes developer experience, one of the main Kubernetes pain points.

In this blog post, we will cover this scenario using Coder, an online IDE serving Visual Studio Code, and Okteto, a tool that makes it very simple to deploy development environments in Kubernetes.

What is Coder?

Coder is an online IDE serving VS Code, compatible with the VS Code extensions you already know and love. And it is beautifully dockerized. You can try it locally by executing the command below from your project’s source code folder:

$ docker run -it -p 127.0.0.1:8443:8443 -v "${PWD}:/home/coder/project" codercom/code-server --allow-http --no-auth

This is an easy way to try Coder, but the power of online IDEs is to have them running in the cloud, in your own Kubernetes cluster. The benefits of this approach are:

  • On-demand development environments that spin up in milliseconds.
  • Your development environment can consume services running in your clusters, such as logs and metrics aggregators or a Kafka instance, for example.
  • Surpass the capabilities of your development machine. Network and hardware go at the speed of the cloud.
  • Code in a production-like environment and reduce integration issues to the minimum.
  • Your development environment is in the cloud, available for anyone for fast validation or troubleshooting.

Powerful, isn’t it? Let’s see how we can automate the Coder orchestration in Kubernetes with Okteto.

What is Okteto?

Okteto is an open source tool that lets you develop directly in your Kubernetes cluster. It does the following things:

  • Mounts a folder from your local filesystem into a remote container.
  • Forwards ports from the remote container to your local machine.
  • Replaces the original remote container image with one that has all your dev tools available.

The source code is available here. Check it out and star it if you like it 🤗!

To install the Okteto CLI on your computer, follow the installation instructions and check that it is properly installed by running:

$ okteto version
okteto version 1.0.11

The Okteto CLI works in any Kubernetes cluster. However, to make things simpler, we are going to follow this guide using the Okteto Cloud, a free-trial Kubernetes cluster. Initialize your Okteto account by running this command:

$ okteto login
Authentication will continue in your default browser
Received code=68b6b0520e2bc58e00dc
Getting an access token...
✓ Logged in as pchico83

Okteto Cloud automatically takes care of setting Roles, Network Policies, Pod Security Policies, Quotas, Limit Ranges and all the other tedious work needed to provide controlled access by several developers to the same Kubernetes cluster.

Run the following command to download your Kubernetes credentials:

$ okteto namespace
✓ Updated context 'cloud_okteto_com' in '/Users/pablo/.kube/config'

You are now ready to deploy the application.

Deploy the Python Sample App

Get a local version of the Python Sample App by executing the following commands in your local terminal:

$ git clone https://github.com/okteto/samples
$ cd samples/coder

You now have a functioning git repository that contains the Python Sample App, a web application that lets you place votes for your favorite animals (votes are stored in Redis). In the manifests/ directory, you also have raw Kubernetes manifests to deploy the application. To deploy the application, execute:

$ kubectl apply -f manifests
service "redis" created
statefulset.apps "redis" created
deployment.apps "vote" created
service "vote" created

If you don’t have kubectl installed, follow this guide.

Wait for a few seconds and the Python Sample App will be ready. Open your browser and go to https://cloud.okteto.com. The application’s endpoint will be displayed on the right side of the screen.

Click on it to see your application running:

Activate your Coder environment

Cool! Now that we have the application running, we are going to use Okteto to replace the python container with our containerized development environment. The development environment contains a fully-configured Coder instance and, thanks to Okteto, all our source code.

This way, your Coder terminal runs inside the Kubernetes cluster. When you execute the python application from the Coder terminal, it is seamlessly integrated with the HTTPS endpoint provided by Okteto. Your application can now consume every resource available in your Kubernetes cluster, including the Redis instance we deployed earlier.

And you get all that by simply executing one command:

$ okteto up 
✓ Okteto Environment activated
✓ Files synchronized
✓ Your Okteto Environment is ready
Namespace: pchico83
Name: vote
Forward: 8443 -> 8443
8080 -> 8080
...
INFO code-server development
...
INFO Connected to shared process

Now go to http://localhost:8443 in your browser to access your Coder IDE instance through Okteto’s secure tunnel. The IDE is already configured with your source code (freshly synchronized from your local machine by Okteto) and with all the plugins you need to develop the python app.

Now that we have our IDE ready, let’s do some development. Open app.py and modify the getOptions function so that instead of animals, you can vote between Local and Cloud development.
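
For example, the modified function could look like this (a sketch; the function name follows the sample app, the option labels are our choice):

```python
def getOptions():
    # Replace the default animal options with the two voting choices
    optionA = 'Local'
    optionB = 'Cloud'
    return optionA, optionB
```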

Save your changes, open a terminal directly in Coder and start the application by running python app.py.

Go back to the application endpoint and reload the page. Yes, your changes are instantly applied! Now go ahead and vote a few times for the Cloud option 😉.

Keep editing your files and enjoy all the Coder features and VS Code extensions, but run everything remotely in a production-like environment. No commit, build or push required 😎!

Let’s step back for a second and analyze what happened when executing okteto up. okteto up reads metadata from the okteto.yml file. Let’s have a look at the content of this file:

name: vote
image: okteto/coder-python:latest
command: ["dumb-init", "code-server", "--no-auth", "--allow-http"]
workdir: /home/coder/project
forward:
- 8443:8443
- 8080:8080

Based on the name field, okteto up replaces the deployment called vote with a container running my development image, okteto/coder-python:latest. This image is generated by the Dockerfile.coder file, which installs python and pip dependencies on top of the codercom/code-server base image. It also installs two VS Code extensions, vscodevim.vim and ms-python.python. Note that all of this is available in your development environment without installing anything locally, and that everything is configured directly from the repo.

The workdir field indicates the path where your local folder is synchronized in the remote container. The synchronization is based on syncthing and is bidirectional: remote changes made in the Coder IDE are synced back to my local filesystem. This way I can still use other tools locally if I need them (e.g. a git visualizer).

Finally, the forward field indicates that the remote ports 8443 and 8080 will be forwarded to your laptop over a secure tunnel. http://localhost:8443 exposes the Coder IDE, and the URL http://localhost:8080 exposes the python application locally.
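
Each forward entry uses local:remote syntax. As a quick illustration of how such a mapping is interpreted (my own sketch, not Okteto’s actual code), a parser might look like:

```python
def parse_forward(spec):
    """Split a 'local:remote' port-forward spec into a pair of ints."""
    local, remote = spec.split(":")
    return int(local), int(remote)

# The manifest above forwards both ports to the same local port numbers.
mappings = [parse_forward(s) for s in ["8443:8443", "8080:8080"]]
```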

Cleanup

Cancel the okteto up command by pressing Ctrl + C and run the following commands to remove the resources created by this guide:

$ okteto down -v
✓ Okteto Environment deactivated

$ kubectl delete -f manifests
service "redis" deleted
statefulset.apps "redis" deleted
deployment.apps "vote" deleted
service "vote" deleted

Conclusions

We have shown the advantages of developing directly in Kubernetes while keeping the same developer experience as working on a local machine. Our development environment was deployed with a single command, and everything came configured directly from the repo, including python dependencies and VS Code extensions. Stop dealing with local environments and become a Cloud Native Developer today!

Interested in improving your Kubernetes and Docker development workflows? Contact Okteto to see how to install our platform in your own infrastructure.

VS Code Remote Development in Kubernetes

VS Code recently announced VS Code Remote Development, a powerful VS Code extension that allows you to take advantage of VS Code’s full feature set in the following scenarios:

  • Develop a local folder in a local container using volume mounts.
  • Develop a remote folder from a remote machine using SSH.

Development environments are getting more complex, in great part due to the broader variety of technologies in use today (e.g. polyglot apps, microservices or third-party APIs). Instead of having to spend hours setting everything up, VS Code Remote Development lets you use a pre-configured container as your development environment.

As teams have become more geographically distributed, a need for new collaboration models has arisen. In the middle of the Cloud Native revolution, we still develop locally. VS Code Remote Development, like Okteto, is helping us evolve towards a Cloud Native development workflow.

In this blog post, I’ll explain the advantages of developing directly in a remote container running in Kubernetes, and how to achieve the best developer experience with the combined powers of VS Code Remote Development and Okteto.

Why should you develop directly in your cluster?

There are three important benefits of developing directly in your cluster:

  • You are developing in a container: Containers give you replicable and isolated environments, without affecting your local machine.
  • You are developing remotely: remote execution gives you access to the same OS, network speed and hardware as in production. It also enables a new level of collaboration by making your development environment available to the rest of your team for troubleshooting or early validation.
  • You are developing in Kubernetes: develop with the same Kubernetes manifests as in production, using the same ingress rules, secrets, config maps, volumes, service meshes or admission webhooks. Kubernetes will launch your development environments in seconds, scaling them vertically or horizontally to make the best use of your resources.

Powerful, isn’t it? But what about my local development experience? As a developer, I love my debuggers, my VS Code extensions… I want to be fast, and not have to build a Docker image or call kubectl every time I want to validate my changes.

Let me show you how to keep this amazing development experience while working remotely through a sample application, the Voting App.

Deploy the Voting App to Kubernetes

Get a local version of the Voting App, by executing the following commands from your local terminal:

$ git clone git@github.com:okteto/samples.git
$ cd samples/vscode

You now have a functioning local git repository that contains:

  • A simple Python 3 application. The Voting App consists of a flask app that allows you to vote for your favorite animals.
  • A multi-stage Dockerfile to generate the Voting App Docker images.
  • A manifests folder with the YAML files to deploy the Voting App.
  • An okteto.yml file to configure the behavior of Okteto.
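
To get a feel for what the app does, the core vote-counting logic can be pictured roughly like this (an illustrative sketch; the real sample keeps the counts in Redis, and the function name here is hypothetical):

```python
# Illustrative in-memory vote tally; the actual sample app stores counts in Redis.
votes = {"Cats": 0, "Dogs": 0}

def cast_vote(option):
    """Record one vote and return the updated count for that option."""
    if option not in votes:
        raise ValueError(f"unknown option: {option}")
    votes[option] += 1
    return votes[option]
```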

In order to deploy the Voting App, you need to have access to a Kubernetes cluster. The easiest way to follow this demo is to deploy the Voting App using Okteto Cloud, a free multi-tenant Kubernetes service provided by Okteto.

Okteto Cloud gives you an isolated Kubernetes namespace to develop and host your applications. Okteto automatically takes care of setting up Roles, Network Policies, Pod Security Policies, Quotas, Limit Ranges and all the other tedious work needed to provide controlled access by several developers to the same Kubernetes cluster. All you need to do is log in with your GitHub account and start developing.

Install the Okteto CLI to get your kubectl credentials by running the command below:

MacOS/Linux

$ curl https://get.okteto.com -sSfL | sh

Windows

$ wget https://downloads.okteto.com/cloud/cli/okteto-Windows-x86_64 -OutFile c:\windows\system32\okteto.exe

Once the CLI is installed, run the okteto login and okteto namespace commands to obtain your kubectl credentials.

$ okteto login
$ okteto namespace

VS Code uses key-based authentication to secure the communication between your local machine and your remote environment. The first thing you need to do is upload your public SSH key to your Okteto-hosted namespace. Do it by running the command below (if you need to install kubectl, follow this link):

$ kubectl create secret generic ssh-public --from-file=authorized_keys=$HOME/.ssh/id_rsa.pub
secret "ssh-public" created

We are almost ready. Deploy the Voting App by executing the command below:

$ kubectl apply -f manifests
deployment.apps "vote" created
service "vote" created

Wait for a few seconds until the application is running. Once ready, go to https://cloud.okteto.com and the Voting App will be accessible at https://vote-[githubid].cloud.okteto.net.

Develop the Voting App directly in Kubernetes

Let’s prepare your remote development environment to develop as if you were in your local machine. Start your Okteto environment by executing the following command:

$ okteto up
✓ Okteto Environment activated
✓ Files synchronized
✓ Your Okteto Environment is ready

Name: vote

The okteto up command automatically does the following tasks:

  • Replaces the original Voting App image with the dev image indicated in the okteto.yml file. The dev image includes an SSH server to integrate with VS Code Remote SSH development.
  • Locally exposes the remote ports indicated in the okteto.yml file. In this case, it exposes container port 22 on localhost:22000 to integrate with VS Code Remote SSH development, using the public key you uploaded for authentication.
  • Initiates a synchronization loop to move your local files to the remote container (it uses syncthing under the hood). The synchronization is bidirectional, so every change made by VS Code in the remote container is synced back to your local machine, and vice versa, keeping your git flow untouched.
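
Before connecting VS Code, you can verify the tunnel is up by checking that something is listening on the forwarded port. A small, generic check (my own helper, not part of Okteto):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With `okteto up` running, port_open("127.0.0.1", 22000) should return True.
```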

Now, everything is ready to set up your VS Code Remote SSH environment.

Run Remote-SSH: Connect to Host... from the Command Palette (F1) and enter -p 22000 [email protected] in the input box as follows:

If you’re not familiar with the VS Code Remote SSH extension, go here to learn more

After a moment, VS Code will connect to the SSH server and set itself up. Once you are connected, you’ll be in an empty window. Open the /src folder on the remote machine using File > Open... or File > Open Workspace....

Now that we are connected to our remote development environment, any terminal we open from VS Code will automatically run on the remote host rather than locally. Open a terminal with Terminal > New Terminal and execute the following command:

$ python app.py
* Serving Flask app "app" (lazy loading)
* Debug mode: on
* Running on http://0.0.0.0:8080/
(Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 117-959-944

This command will start the python service on your remote development environment. You can verify that everything is up and running by going to your service’s endpoint at https://vote-[githubid].cloud.okteto.net.

Now let’s make a code change. Open app.py in VS Code and modify the getOptions function with the code below:

def getOptions():
    optionA = 'Otters'
    optionB = 'Dogs'
    return optionA, optionB

Go to the browser again and reload the page. Your changes were applied instantly. No commit, build or push required 😎! Even more awesome: you keep all the VS Code features and extensions, while running everything remotely in a production-like environment!

We recommend keeping Git extensions local. This way they use your local keys and you don’t need to install them remotely.

Conclusions

We explained the advantages of developing directly in Kubernetes while keeping the same developer experience as working on a local machine. Working in the cloud is always better. You don’t work on your spreadsheets or listen to media files locally, do you? Stop dealing with local environments and become a Cloud Native Developer today 😎!

Interested in improving your Kubernetes and Docker development workflows? Contact Okteto to see how to install our platform in your own infrastructure.


Accelerate Serverless Development with Cloud Run and Okteto

Google recently introduced Cloud Run, a new solution for deploying your code as containers with no infrastructure management. It is a step forward for serverless platforms, eliminating most of the architectural restrictions that Lambda functions have.

There is an official Cloud Run contract for supported containers that we can summarize in the following points:

  • Compile your container for 64-bit Linux;
  • Your container must listen on port 8080 for HTTP requests;
  • Your HTTP response must be ready within four minutes of receiving a request;
  • The available memory per request is 2GB;
  • Computation is stateless and scoped to a single request.

If your application meets these requirements it will work in Cloud Run. Note that there is no restriction on the programming language used by your application.
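
As a sketch of what the contract means in practice, here is a minimal stdlib HTTP server that listens on port 8080 (or $PORT) and handles each request statelessly. The sample app uses Flask, but the contract itself is language-agnostic; this example is ours, not part of the sample:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stateless: everything needed is scoped to this single request.
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(port=None):
    """Bind on the port Cloud Run expects: 8080, or the PORT env var."""
    if port is None:
        port = int(os.environ.get("PORT", 8080))
    return HTTPServer(("", port), Handler)

# In a real container you would call make_server().serve_forever()
```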

Cloud Run runs on top of Kubernetes, but you don’t need to know anything about Kubernetes to deploy your applications in Cloud Run. Let’s see how easy it is with a sample application, the Voting App.

Deploy the Voting App to Cloud Run

I assume you are already familiar with GCP (Google Cloud Platform) and Cloud Run. If you are not, check this excellent blog post to get a sense of it. For the purpose of this sample, we will just need a Project ID on which to deploy Cloud Run applications.

Get a local version of the Voting App, by executing the following commands from your terminal:

$ git clone git@github.com:okteto/samples.git
$ cd samples/python

You now have a functioning git repository that contains a simple Python 3 application and a Dockerfile to generate the associated Docker image. The Voting App consists of a flask app that allows you to vote for your favorite animals. Build the Docker image by executing:

$ gcloud builds submit --tag gcr.io/[project-id]/vote

After about 30 seconds you will have your Docker image built and uploaded to the Google Container Registry. Deploy your image to Cloud Run by executing the command below:

$ gcloud beta run deploy --image gcr.io/[project-id]/vote
Service name: (vote):
Deploying container to Cloud Run service [vote] in project [project-id] region [us-central1]
Allow unauthenticated invocations to new service [vote]? (y/N)? y
✓ Deploying new service... Done.
✓ Creating Revision...
✓ Routing traffic...
Done.
Service [vote] revision [vote-00001] has been deployed and is serving traffic at https://vote-cg2bjntyuq-uc.a.run.app

After another 30 seconds or so you will be able to browse to the generated URL and see the Voting App online! Really cool, isn’t it?

Develop the Voting App with Okteto

Now it is time to do some work on the Voting App. Building and deploying the Voting App to Cloud Run takes about 1 minute for every change we want to test. If you don’t want to kill your productivity, you will need to take a different approach.

Let me introduce you to Okteto. Okteto provides instant cloud-based environments to code and collaborate. Instead of having to build and deploy a container every time you want to see your changes in action, Okteto lets you develop your applications directly in the cloud.

The first thing we need to do is install the Okteto CLI by running the command below:

MacOS/Linux

$ curl https://get.okteto.com -sSfL | sh

Windows

$ wget https://downloads.okteto.com/cli/okteto-Windows-x86_64 -OutFile c:\windows\system32\okteto.exe

Once the CLI is installed, run the okteto login command to create your Okteto account and get an API token for your workstation.

$ okteto login

Run the okteto namespace command to fetch credentials for your Okteto personal namespace.

$ okteto namespace

Now start your Okteto Development Environment by executing the okteto up command:

$ okteto up
Deployment 'vote' doesn't exist. Do you want to create a new one? [y/n]: y
✓ Okteto Environment activated
✓ Files synchronized
✓ Your Okteto Environment is ready
Name: vote

* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 117-959-944

After a couple of seconds the Voting App will be deployed. Browse to http://localhost:8080/ or go to https://cloud.okteto.com and open the generated URL to see the Voting App online!

Note that Okteto creates HTTPS endpoints and takes care of the infrastructure for your Okteto Development Environment, but this environment isn’t highly available. It is meant for development purposes only.

Now you are ready to see the power of Okteto in action. Open your local IDE, go to app.py and modify the getOptions function with the code below:

def getOptions():
    optionA = 'Otters'
    optionB = 'Dogs'
    return optionA, optionB

Go to the browser again and reload the page. Your changes were applied instantly. No commit, build or push required 😎!

Edit the source code as many times as you need. With Okteto you can iterate in your code instantly, instead of wasting minutes building and deploying images. This is possible because Okteto instantly synchronizes your local filesystem to your cloud development environment.
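
Conceptually, this works by detecting which files changed and shipping only those. A naive sketch of change detection (Okteto actually uses syncthing, which is far more sophisticated):

```python
import hashlib
import os

def snapshot(root):
    """Map each file under root to a hash of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                state[os.path.relpath(path, root)] = hashlib.sha256(fh.read()).hexdigest()
    return state

def changed_files(before, after):
    """Paths added, removed, or modified between two snapshots."""
    return sorted(p for p in set(before) | set(after) if before.get(p) != after.get(p))
```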

Once you are happy with your changes, deploy them to production with Cloud Run:

$ gcloud builds submit --tag gcr.io/[project-id]/vote
$ gcloud beta run deploy --image gcr.io/[project-id]/vote

Conclusion

We have shown how easy it is to deploy applications in Cloud Run, and how to do efficient development in the cloud with Okteto. Even more awesome: you have been able to develop, build and deploy a Docker-based application without typing a single docker command, thanks to the combined powers of Cloud Run and Okteto!

Okteto provides replicable, cloud-based development environments, enabling a new level of team collaboration and integration with the rest of your cloud services. Try Okteto for free and learn how we can help you develop cloud-native applications faster than ever.

Lightweight Kubernetes development with k3s and Okteto

A couple of days ago, Rancher labs released k3s, a lightweight, fully compliant production-grade Kubernetes. The entire thing runs out of a 40MB binary, runs on x64 and ARM, and even from a docker-compose. Saying that this is a great engineering feat is an understatement.

I tried it out as soon as I saw the announcement. I expected the initial release to show promise, but to be rough around the edges. Was I in for a surprise!

I decided to go with the docker-compose way so I didn’t have to deal with downloads, configs, and all that. I went ahead, got the compose manifest, and launched it.

$ mkdir ~/k3s
$ curl https://raw.githubusercontent.com/rancher/k3s/master/docker-compose.yml > ~/k3s/docker-compose.yml
$ cd ~/k3s
$ docker-compose up -d
Starting k3s_node_1 ... done
Starting k3s_server_1 ... done

After about 30 seconds, I had my k3s instance up and running.

k3s’ docker-compose drops the kubeconfig file in the same folder you started it at. Great pattern!

$ export KUBECONFIG=~/k3s/kubeconfig.yaml
$ kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
df305e6358a6   Ready    <none>   5m16s   v1.13.3-k3s.6

Once your cluster is ready, install Rancher’s local path provisioner, so we can use the local storage of your k3s node.

$ kubectl apply -f https://gist.githubusercontent.com/rberrelleza/58705b20fa69836035cf11bd65d9fc65/raw/bf479a97e2a2da7ba69d909db5facc23cc98942c/local-path-storage.yaml

$ kubectl get storageclass
NAME                   PROVISIONER             AGE
local-path (default)   rancher.io/local-path   50s

We built okteto to quickly create development environments in your Kubernetes cluster. k3s is a fully compliant Kubernetes distro. Will they work together? Only one way to find out (full disclosure: I’m one of Okteto’s founders).

For this experiment, I went with the movies app sample. I cloned the repository and deployed the app with kubectl.

$ git clone https://github.com/okteto/samples.git
$ cd samples/vote
$ kubectl apply -f manifests
deployment.extensions/vote created
service/vote created
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
movies-7cd9f9ddb-sflwf   1/1     Running   0          55s

Once the application is ready, I used okteto to launch my development environment in my k3s instance (install okteto from here).

$ okteto up
✓ Okteto Environment activated
✓ Files synchronized
✓ Your Okteto Environment is ready
Name: vote

* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 899-835-619

With the application now running, I fired up my browser and went to http://localhost:8080 to see the app in all its high-res glory.

Finally, I went ahead and did some mock dev work to try my entire workflow. I opened vscode, modified the getOptions function in app.py with the following code, and saved my file:

def getOptions():
    optionA = 'Otters'
    optionB = 'Dogs'
    return optionA, optionB

I went back to the browser. The changes were automatically reloaded (thanks to Flask’s hot reloader) without my having to build a container, pick a tag, redeploy it or even refresh my browser! 😎

Conclusion

k3s is an amazing product. It has some issues (I couldn’t get outbound network connections to work), but if this is the first release, I can’t wait to see what they come up with in the near future.

Kudos to the team at Rancher for taking the fully compliant approach. With this, their users can leverage the entire ecosystem from day one!

Interested in improving your Kubernetes and Docker development workflows? Contact Okteto and stop waiting for your code to build and redeploy.

Develop helm applications directly in your kubernetes cluster

Deploying applications in Kubernetes can be complicated. Even the simplest application requires creating a series of interdependent components (e.g. namespaces, RBAC rules, ingresses, services, deployments, pods, secrets…), each with one or more YAML manifests.

Helm is the de facto package manager for Kubernetes. It allows developers and operators to easily package, configure, and deploy applications onto Kubernetes clusters. If you’re building an application that will run in Kubernetes, you should really look into leveraging Helm.

In this tutorial we’ll show you how to build your first Helm chart and how to use Okteto to develop your application directly in the cluster, saving you tons of time and integration problems.

This tutorial assumes that you have some Kubernetes knowledge and that you have access to a cloud provider, or you can set it up locally.

Helm 101

If you are new to Helm, I recommend you first go through one of the following articles:

Setup a Kubernetes cluster

The official Kubernetes setup guide covers this topic extensively. For the purpose of this tutorial, I recommend you either deploy a remote cluster via Digital Ocean’s Kubernetes service or locally with Minikube.

Install Helm

For OSX you can install it via brew by running the command below.

$ brew install helm

You can also install it via curl.

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh

Once it’s installed, initialize it by running the command below.

$ helm init

The full installation guide is available here.

Generate your initial chart

The easiest way to create a new chart is by using the helm create command to create the initial scaffold of your chart.

$ helm create mychart

Helm will create a new directory called mychart with the structure shown below.

mychart
|-- Chart.yaml
|-- charts
|-- templates
| |-- NOTES.txt
| |-- _helpers.tpl
| |-- deployment.yaml
| |-- ingress.yaml
| |-- service.yaml
|-- values.yaml

Deploy your chart

The default chart is configured to run an NGINX server exposed via a service with a ClusterIP. To access it externally, we’ll tell it to use a NodePort instead.

Deploy the chart using the helm install command.

$ helm install --name myapp ./mychart --set service.type=NodePort
NAME: myapp
LAST DEPLOYED: Tue Jan  8 16:08:05 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME           AGE
myapp-mychart  0s

==> v1beta2/Deployment
myapp-mychart  0s

==> v1/Pod(related)
NAME                           READY  STATUS   RESTARTS  AGE
myapp-mychart-846949857-9d28t  0/1    Pending  0         0s

NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services myapp-mychart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

The output of the install command displays a summary of the resources created, and it renders the contents of the NOTES.txt file. Run the commands listed there to get a URL to access the NGINX service.

For Minikube:

$ export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services myapp-mychart)

$ export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

$ echo http://$NODE_IP:$NODE_PORT

For a hosted Kubernetes cluster (like Digital Ocean’s or GKE):

$ export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services myapp-mychart)

$ export NODE_IP=$(kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\(@.type==\"ExternalIP\"\)].address})

$ echo http://$NODE_IP:$NODE_PORT

Open the URL in your browser.

Develop your application

Once we have a basic chart up and running, it’s time to develop our own application. At this point, we would have to follow the typical developer workflow:

  1. Build and test the application locally
  2. Build a container
  3. Give the container a label
  4. Push the container to a registry
  5. Update the values in our chart to match the new Docker image
  6. Upgrade the chart
  7. Test your changes
  8. Go back to 1

Instead of following that workflow, we’re going to save time and friction by developing our application directly in the cluster. The Cloud Native way.

Cloud Native Development

Cloud Native Development is THE way to develop Cloud Native Applications. Instead of wasting time and resources by developing locally and then testing in the cluster, we just do everything directly in the cluster. We open sourced Okteto to make it easier than ever to become a Cloud Native Developer.

Install the latest version of Okteto.

$ brew tap okteto/cli
$ brew install okteto

The installation guide on the repo has instructions on how to do it for MacOS, Windows, and Linux.

For the purpose of this tutorial, we’ll use a simplified version of Docker’s famous Voting App. Run the following command to get the code locally.

$ git clone https://github.com/okteto/vote

Open a second terminal window, and go to the vote folder. From there, run the okteto init command to initialize your Cloud Native Development environment. This command creates a file called okteto.yml with the content displayed below.

$ okteto init
Python detected in your source. Recommended image for development: okteto/python:3
Which docker image do you want to use for your development environment? [okteto/python:3]:

✓ Okteto manifest (okteto.yml)

$ cat okteto.yml
name: vote
image: okteto/python:3
command:
- bash
workdir: /usr/src/app

Open your favorite IDE, and replace the value of name with the name of your deployment. It should look something like this:

name: myapp-mychart
image: okteto/python:3
command:
- bash
workdir: /usr/src/app

Run the okteto up command to start your Cloud Native Development environment.

$ okteto up 
✓ Files synchronized
✓ Okteto Environment activated
Namespace: django-ramiro
Name: myapp-mychart

Welcome to your development environment. Happy coding!
okteto>

Once the environment is ready, start your application by executing the following commands in your Okteto terminal:

okteto> pip install -r requirements.txt
okteto> python app.py

At this point, your application is running directly in the cluster (our GitHub repo has an in-depth explanation of how this works). Notice the processed by text near the bottom; it's your Kubernetes namespace and pod name. Go back to your browser and reload your tab to see the application in action.

Try it out a few times, just to make sure everything works. Now open the source of the application in your favorite IDE. Edit the file vote/app.py and change option_a on line 8 from “Cats” to “Otters”. Save your changes.
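The edit itself is a one-liner. A hedged sketch of the relevant lines, based on Docker's original voting app (the simplified okteto/vote source may differ slightly):

```python
import os

# The OPTION_A/OPTION_B environment variables still override these defaults
option_a = os.getenv('OPTION_A', "Otters")  # was "Cats"
option_b = os.getenv('OPTION_B', "Dogs")

print(option_a)
```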

Go back to the browser, refresh the Voting App UI, and notice that your code changes are instantly applied. Try a few more changes.

Once you are done developing, press ctrl+c and run exit in the terminal where okteto up is running to stop your environment (psst: notice how you never used docker or kubectl while working on your app. Pretty cool, no?).

Final testing

Once you’re satisfied with your code, let’s test it end to end:

  1. Run okteto down to restore the original deployment (this is so that helm can process the new changes).
$ okteto down
✓ Your Okteto Environment has been deactivated
  2. Build a docker image for the vote application and push it to the registry.
$ docker build -t $YOUR_DOCKER_USER/vote:okteto .
$ docker push $YOUR_DOCKER_USER/vote:okteto
  3. Create a file named updated-values.yaml inside the mychart folder. We’ll use this file to override the default configuration of the chart. Set the values of image.repository and image.tag to match your newly built image.
image:
  repository: your_docker_user/vote
  tag: okteto
  4. Upgrade your application using Helm by running the command below.
$ helm upgrade myapp ./mychart --set service.type=NodePort --values=./mychart/updated-values.yaml
  5. Open your browser, and verify that your application is running correctly.
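You can also sanity-check the upgrade from the command line first (a hedged sketch; the release name myapp comes from earlier steps, and the service name is assumed to follow the release-chart naming convention):

```shell
# List releases and confirm the new values were applied
helm ls
helm get values myapp

# Grab the NodePort the service is exposed on (service name assumed
# to be <release>-<chart>, as used earlier in this tutorial)
kubectl get svc myapp-mychart -o jsonpath='{.spec.ports[0].nodePort}'
```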

Once your application is ready, you can package it using the helm package command, and even distribute it via a repo or with Kubeapps. This article has a good explanation of the process.
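A hedged sketch of that packaging step (the repo URL is a placeholder, and the .tgz version comes from your Chart.yaml):

```shell
# Package the chart into a versioned archive, e.g. mychart-0.1.0.tgz
helm package ./mychart

# Build/update an index.yaml so the directory can be served as a chart repo
helm repo index . --url https://charts.example.com
```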

Conclusion

Helm is a great modern choice for deploying and managing applications. But developing charts and applications using the traditional developer workflow is slow and full of friction. Developing directly in the cluster makes the entire process a lot more efficient. Okteto is here to help you with that.

Interested in improving your Kubernetes and Docker development workflows? Contact Okteto and stop waiting for your code to build and redeploy.