Deploy OKE nodes using Ubuntu images¶
Ubuntu images are available for worker nodes on Oracle Kubernetes Engine (OKE) in Oracle Cloud. Because this is a Limited Availability release, only a select number of Ubuntu suites and Kubernetes versions are currently supported.
For node stability, the unattended-upgrades package has been removed from the Ubuntu image for OKE. If your nodes need updates or security patches, refer to the Oracle documentation on node cycling for managed nodes and node cycling for self-managed nodes.
Available releases¶
| Ubuntu Release | OKE Version | Location |
|---|---|---|
| 22.04 (Jammy Jellyfish) | 1.29 | <available-releases-location-link> |
Networking plugin availability¶
The availability of networking plugins (Flannel / VCN Native) depends on the type of OKE node being used:
| Node Type | Plugin | Supported |
|---|---|---|
| Managed | Flannel | Yes |
| Managed | VCN Native | Yes |
| Self-Managed | Flannel | Yes |
| Self-Managed | VCN Native | No |
Prerequisites¶
You’ll need:
- An Oracle Cloud compartment to create the nodes in.
- A configured and running OKE cluster on Oracle Cloud.
- Oracle’s oci CLI installed.
- kubectl installed (self-managed nodes only).
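Before continuing, a quick way to confirm the CLI tooling is installed and configured is to run the commands below; the region list call also checks that the oci CLI can authenticate against your tenancy.
oci --version
oci iam region list
kubectl version --client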
Find an Ubuntu image¶
Select a version from the available releases. The images are listed as JSON in ascending order, so the latest image is at the bottom. Make note of the image path for the image you choose. The image path conforms to the following format:
<suite>/oke-<version>/<serial>/<image-name>.img
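For example, a 22.04 image built for OKE 1.29 uses the jammy suite, so its path takes the following form (the serial and image name remain placeholders, not a real build):
jammy/oke-1.29/<serial>/<image-name>.img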
If you wish to get the latest image path, use the following command:
curl <available-releases-location-link> | jq ".[][-2] | .name"
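If you want to reuse the result in later commands, one approach is to capture it in a shell variable and build the full Object Storage URL used during registration; note the added -r flag so jq prints the name without surrounding quotes:
IMAGE_PATH=$(curl -s <available-releases-location-link> | jq -r ".[][-2] | .name")
IMAGE_URL="<available-releases-location-link>/${IMAGE_PATH}"
echo "${IMAGE_URL}"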
Register an Ubuntu image¶
Images must be registered to be used with Oracle Cloud services. To learn more, refer to the Oracle Cloud documentation for managing custom images.
When registering an image, you will need to configure the Launch mode. The suggested settings are PARAVIRTUALIZED for virtual nodes and NATIVE for bare-metal nodes.
Start the registration process in Oracle Cloud by navigating to Compute > Custom Images and selecting Import Image. Select Import from an Object Storage URL, then paste the available releases location link concatenated with your image path into the Object Storage URL field. The pasted URL should conform to the following format:
<available-releases-location-link>/<image-path>
In the rest of the form, you must provide your Compartment, Image name, and Launch mode. Additionally, set the Operating System field to Ubuntu and the Image type field to QCOW2.
Lastly, select Import image and wait for the registration to complete. This process is expected to take a while.
The following command imports your image directly from a provided URI. You’ll need to supply all of the values below except operating-system and source-image-type, which are already set.
For more information on this command, refer to the oci docs for import from-object-uri.
oci compute image import from-object-uri \
--compartment-id <compartment-id> \
--uri <available-releases-location-link>/<image-path> \
--display-name <image-name> \
--launch-mode <launch-mode> \
--image-source-object-name <object-name> \
--operating-system "Ubuntu" \
--operating-system-version <ubuntu-version-number> \
--source-image-type QCOW2
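The node creation steps below need the OCID of the registered image. One way to look it up after the import completes, assuming the display name you chose above, is:
oci compute image list \
  --compartment-id <compartment-id> \
  --display-name <image-name> \
  --query 'data[0].id' \
  --raw-output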
Create OKE nodes with Ubuntu Images¶
The following steps for creating nodes assume that you have an existing OKE cluster on Oracle Cloud; existing nodes are not required. If you don’t have an OKE cluster prepared, Oracle’s documentation for creating a cluster is a good place to start.
Create managed OKE nodes with Ubuntu¶
Managed nodes are node instances whose lifecycle is managed by the OKE service.
Since this is a Limited Availability release of Ubuntu images for OKE, you can only create managed nodes through the Oracle Cloud API (oci CLI or SDK). The ability to create managed nodes from the Oracle Cloud UI will be added later.
To create a managed node, start by copying the following cloud-init script into a file called user-data.yaml.
#cloud-config
runcmd:
- oke bootstrap
Then, create a placement configuration file to specify where in Oracle Cloud the managed node pool should be created and save the file as placement-config.json.
[{
"compartmentId":"<compartment-id>",
"availabilityDomain":"<availability-domain>",
"subnetId":"<subnet-id>"
}]
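If you need to look up the availability domain names in your tenancy for this placement configuration, one option is:
oci iam availability-domain list --compartment-id <compartment-id>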
Lastly, replace the values and run the following command to create the managed node pool:
oci ce node-pool create \
--cluster-id=<cluster-id> \
--compartment-id=<compartment-id> \
--name=<pool-name> \
--node-shape=<node-shape> \
--size=<pool-count> \
--kubernetes-version="1.29.1" \
--node-image-id=<ubuntu-image-id> \
--placement-configs="$(cat placement-config.json)" \
--node-metadata='{"user_data": "'"$(base64 user-data.yaml)"'"}'
View the node pool status in Oracle Cloud by navigating to Kubernetes Clusters (OKE) and choosing your cluster, then select Resources > Node pools and select the latest node pool.
Everything will be running as expected when the Kubernetes node condition and Node state of all the nodes are labelled Ready.
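The same status information can also be polled with the oci CLI; for example, assuming you noted the node pool OCID returned by the create command:
oci ce node-pool get --node-pool-id <node-pool-id>
The nodes section of the JSON output reports the lifecycle state of each node in the pool.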
Create self-managed OKE nodes with Ubuntu¶
The following instructions assume that you have configured your OKE cluster to work with self-managed nodes. If you have not done this, refer to the Oracle documentation for working with self-managed nodes.
Before adding a self-managed node, ensure that kubectl is configured for your OKE cluster by running the following command. This process will be easier if kubectl is configured for a single OKE cluster.
kubectl cluster-info
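To double-check which cluster kubectl currently points at, you can also inspect the active context:
kubectl config current-context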
Next, the self-managed node needs a custom cloud-init script that requires two specific values: the Kubernetes certificate from the OKE cluster and the Kubernetes API private endpoint.
Obtain the Kubernetes certificate for the current context with the following command:
kubectl config view --minify --raw -o json | jq -r '.clusters[].cluster."certificate-authority-data"'
Then obtain the Kubernetes API private endpoint from Oracle Cloud by navigating to Kubernetes Clusters (OKE) and selecting your cluster. Be sure to copy only the IP, not the port.
Alternatively, use the following oci command to obtain the Kubernetes API private endpoint:
oci ce cluster get --cluster-id <cluster-id> | jq -r '.data.endpoints."private-endpoint"' | cut -d ":" -f1
Use these obtained values (certificate-data and private-endpoint) in the following example and save it as user-data.yaml.
#cloud-config
runcmd:
- oke bootstrap --ca <certificate-data> --apiserver-host <private-endpoint>
write_files:
- path: /etc/oke/oke-apiserver
permissions: '0644'
content: <private-endpoint>
- encoding: b64
path: /etc/kubernetes/ca.crt
permissions: '0644'
content: <certificate-data>
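As an optional shortcut, the following sketch captures both values into shell variables and writes user-data.yaml in one step; it assumes kubectl is configured for this single cluster and that you substitute your own cluster OCID:
CA_DATA=$(kubectl config view --minify --raw -o json | jq -r '.clusters[].cluster."certificate-authority-data"')
APISERVER=$(oci ce cluster get --cluster-id <cluster-id> | jq -r '.data.endpoints."private-endpoint"' | cut -d ":" -f1)
cat > user-data.yaml <<EOF
#cloud-config
runcmd:
  - oke bootstrap --ca ${CA_DATA} --apiserver-host ${APISERVER}
write_files:
  - path: /etc/oke/oke-apiserver
    permissions: '0644'
    content: ${APISERVER}
  - encoding: b64
    path: /etc/kubernetes/ca.crt
    permissions: '0644'
    content: ${CA_DATA}
EOF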
Now, create the self-managed node in Oracle Cloud by navigating to Compute > Instances and selecting Create Instance. Next, select Change Image > My Images, and then select the Ubuntu image you recently registered.
Set up the cloud-init for the instance by selecting Show advanced options > Paste cloud-init script, and then paste your completed cloud-init script (the one saved in user-data.yaml).
Lastly, select Create and wait for your instance to be provisioned.
The following command will create an instance with your previously created user-data.yaml. The value for subnet-id should correspond to the subnet used for the nodes in your OKE cluster.
oci compute instance launch \
--compartment-id <compartment-id> \
--availability-domain <availability-domain> \
--shape <instance-shape> \
--image-id <ubuntu-image-id> \
--subnet-id <subnet-ocid> \
--user-data-file user-data.yaml \
--display-name <instance-name>
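Before polling Kubernetes, you can confirm that the instance itself has reached the RUNNING state; one option, using the instance OCID returned by the launch command, is:
oci compute instance get --instance-id <instance-id> --query 'data."lifecycle-state"' --raw-output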
Self-managed nodes cannot be viewed from Oracle Cloud, so poll their status with the following command. Nodes may take several minutes to join the cluster.
watch 'kubectl get nodes'
Once your node is in the Ready state, everything is running as expected and your self-managed node is ready to accept pods.
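As an optional smoke test (the pod name here is arbitrary), you can schedule a small pod, confirm it starts on the new node, and then clean it up:
kubectl run oke-smoke-test --image=nginx --restart=Never
kubectl get pod oke-smoke-test -o wide
kubectl delete pod oke-smoke-test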
Further references¶
For more information about the oci CLI and managing self-managed nodes on your cluster, refer to the Oracle documentation.