# Upgrade your Okteto instance
To ensure a successful Okteto upgrade, we recommend testing it on a dedicated test cluster. Please contact your sales representative to request a test license for your test cluster.
To upgrade to a new release, modify your `config.yaml` with your desired changes and then run:
```console
helm repo update
helm upgrade okteto okteto/okteto -f config.yaml --namespace=okteto --version <version_number>
```
For example:
```console
helm repo update
helm upgrade okteto okteto/okteto -f config.yaml --namespace=okteto --version 1.25.0
```
You can use `helm ls` to find the name of your release.
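For example, assuming Okteto is installed in the `okteto` namespace:

```console
helm ls --namespace=okteto
```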
Please review the release notes before upgrading. New features, known issues, and configuration changes will be listed there.
## Upgrading to Okteto 1.25.x
Starting with the 1.25 release, Okteto will use CLI version 3.0.0 when deploying and destroying pipelines. Please familiarize yourself with the changes in Okteto CLI 3.0.0 before upgrading your cluster.
## Upgrading to Okteto 1.22.x
As part of the 1.22 release, we have reorganized the helm settings that configure the annotations, labels, and role bindings applied to users' service accounts. The following settings changed:
- `clusterRole` has moved to `serviceAccounts.roleBindings.namespaces`
- `globalClusterRole` has moved to `serviceAccounts.clusterRoleBinding`
- `user.serviceAccount.annotations` has moved to `serviceAccounts.annotations`
- `user.serviceAccount.labels` has moved to `serviceAccounts.labels`
- `user.extraRoleBindings.roleBindings` has moved to `serviceAccounts.extraRoleBindings`
In summary, if you had some or all of these settings configured like this before:
```yaml
clusterRole: cluster-admin
globalClusterRole: example-cluster-role
user:
  serviceAccount:
    annotations:
      custom.annotation/one: one
      custom.annotation/two: two
    labels:
      custom.label/one: one
      custom.label/two: two
  extraRoleBindings:
    enabled: true
    roleBindings:
      namespace-name1:
        - cluster-role1
        - cluster-role2
      namespace-name2:
        - cluster-role3
```
The configuration for 1.22 versions would be the following:
```yaml
serviceAccounts:
  annotations:
    custom.annotation/one: one
    custom.annotation/two: two
  labels:
    custom.label/one: one
    custom.label/two: two
  roleBindings:
    namespaces: cluster-admin
  clusterRoleBinding: example-cluster-role
  extraRoleBindings:
    namespace-name1:
      - cluster-role1
      - cluster-role2
    namespace-name2:
      - cluster-role3
```
The old settings are still honored and take precedence, but support for them will be removed in a future version. Please migrate them as soon as possible.
We have also made another relevant change to consider when migrating to 1.22: we have stopped automatically creating a role binding between custom service accounts created in namespaces managed by Okteto and the cluster role specified in `serviceAccounts.roleBindings.namespaces` (`clusterRole` in previous versions). If you create any custom service account as part of your application, make sure you also create any role or role binding it needs.
If you experience permission issues because of this change, we recommend creating an explicit role and role binding. As an alternative, you can manually bind the service account to the `cluster-admin` cluster role using the following approach:
- Create the `rb.yml.tpl` file with the following content:
  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: <service-account-name>
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
    - kind: ServiceAccount
      name: <service-account-name>
      namespace: ${OKTETO_NAMESPACE}
  ```
- Replace `<service-account-name>` with the name of the service account you want to bind to the `cluster-admin` role.
- In the Okteto Manifest deploy commands, use `envsubst` to replace the `${OKTETO_NAMESPACE}` variable:
  ```yaml
  deploy:
    - envsubst < k8s/rb.yml.tpl > k8s/rb.yml
    - kubectl apply -f k8s
  ```
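If you want to preview the rendered manifest locally before deploying, you can run `envsubst` yourself. The namespace value below is only a placeholder for illustration:

```console
OKTETO_NAMESPACE=my-namespace envsubst < k8s/rb.yml.tpl
```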
## Upgrading to Okteto 1.20.x
In the Okteto 1.20 release, we have enabled Pull Secrets by default. If you do not wish Okteto to manage Pull Secrets, disable the following helm values, as shown in the snippet after this list:
- `regcredsManager.pullSecrets.enabled` should be set to `false`
- `daemonset.configurePrivateRegistriesInNodes.enabled` should be set to `false` (if you are already preventing Okteto from managing credentials on the nodes, this may already be configured correctly)
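Expressed as helm values, those two settings look like this:

```yaml
regcredsManager:
  pullSecrets:
    enabled: false
daemonset:
  configurePrivateRegistriesInNodes:
    enabled: false
```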
## Upgrading to Okteto 1.15.x
As part of 1.15.0, we have enabled chroot mode by default in the ingress controllers deployed as part of Okteto in order to reduce the resources accessible to the nginx process. This change is applied automatically as part of the upgrade to 1.15.x, but note that the ingress controller pods need the `SYS_CHROOT` capability to work, so make sure your cluster allows it.
Chroot mode can be disabled at any time in both ingress controllers by setting the following values in your helm configuration:
```yaml
ingress-nginx:
  controller:
    image:
      chroot: false
okteto-nginx:
  controller:
    image:
      chroot: false
```
## Upgrading to Okteto 1.14.x
As part of 1.14.0, we have changed the way you configure credentials for private registries. Previously, you had to specify them in your helm configuration under the `privateRegistry` key. That section is now deprecated, and we have added new options in the Admin view to manage credentials there.
To simplify the migration for instances already using that setting, Okteto automatically migrates the credentials specified in the helm chart to secrets within the cluster as part of the upgrade to 1.14.x. From that moment on, any change to those credentials has to be made through the Admin view. You can then remove the `privateRegistry` section from your helm configuration and upgrade Okteto again to completely stop using the deprecated section, as it no longer applies.
If you don't use the `privateRegistry` key, you can upgrade Okteto like any other release. If you do use it, we recommend following these steps to upgrade your instance to 1.14.x or higher:
- Upgrade to the target version, keeping the `privateRegistry` section in your helm configuration.
- Verify that the credentials were successfully migrated. You can do this by going to the Registry Credentials section of the Admin Dashboard and checking that all credentials are there.
- Once you have verified that the credentials were successfully migrated, remove the `privateRegistry` section from your helm configuration and upgrade Okteto again.
If any registry is missing or the credentials weren't migrated at all, check the logs of a job called `<helm-release>-regcreds-migration` to find the root cause.
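For example, assuming your helm release is named `okteto` and is installed in the `okteto` namespace, you can inspect the migration job with:

```console
kubectl logs job/okteto-regcreds-migration --namespace=okteto
```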
## Upgrading to Okteto 1.7.x
As part of 1.7.0, we have changed the default behavior of BuildKit so it no longer uses a persistent volume. To keep using persistence, include the following setting in your configuration file:
```yaml
buildkit:
  persistence:
    enabled: true
```
We have also moved the settings used to configure external storage for the registry. Before, they were configured under the `cloud` key; now they have to be configured under the `registry` key.
So, before 1.7.x, the external storage configuration would look like this:
```yaml
cloud:
  provider:
    aws:
      enabled: true
      region: us-west-2
      bucket: okteto-private-images
      iam:
        accessKeyID: "XXXXXX"
```
That configuration still works for backward compatibility, but we recommend updating it to the new keys. To do so, change it to this:
```yaml
registry:
  secret:
    name: "okteto-cloud-secret"
    secretKey: "key"
  storage:
    provider:
      aws:
        enabled: true
        region: us-west-2
        bucket: okteto-private-images
        iam:
          accessKeyID: "XXXXXX"
```
If you didn't override the value of `cloud.secret`, you need to include the `registry.secret` section exactly as shown in the previous snippet to keep using the default values specified by the Okteto chart. If you had a custom value for `cloud.secret`, you need to specify the corresponding values under the `registry.secret` key.
## Upgrade to Kubernetes 1.25
Kubernetes version 1.25 no longer supports pod security policies. If you are running Okteto in Kubernetes < 1.25, the default installation of Okteto creates pod security policies. If you upgrade Kubernetes to version 1.25 before upgrading Okteto, you will encounter the following error during the upgrade process:
```console
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: [resource mapping not found for name: "okteto-permissive" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first, resource mapping not found for name: "okteto-restrictive" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first]
```
You can find more information about how to recover from this state in this community link.
To avoid this issue, upgrade Okteto before upgrading Kubernetes to version 1.25 and add the following configuration to your helm values file:
```yaml
podSecurityPolicy:
  enabled: false
```
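Before upgrading Kubernetes, you can verify whether the Okteto pod security policies are still present in your cluster. Note that this check only works while the cluster still serves the `policy/v1beta1` API (that is, before Kubernetes 1.25):

```console
kubectl get podsecuritypolicies okteto-permissive okteto-restrictive
```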