The following steps provision an AKS cluster with a managed node pool, attach the previously created Azure Virtual Network, and grant the AKS cluster identity the right to pull images from ACR.
Adding Configuration
The pulumi config CLI command saves values as configuration parameters. Run the following commands to set values that can then be reused across multiple environments:
$ pulumi config set k8sVersion 1.30.3
$ pulumi config set nodeCount 3
$ pulumi config set nodeSize Standard_A2_v2
$ pulumi config set adminUser aksadmin
$ pulumi config set ingressNamespace ingress-nginx
$ pulumi config set appNamespace apps
$ pulumi config set letsEncryptEmail <your_email>
Read Config Values
You modify the config.ts file and add the following code:
// config.ts
import * as pulumi from "@pulumi/pulumi";
// Create a configuration object
const pulumiConfig = new pulumi.Config();
// The Azure location is set in the azure-native namespace, so it needs its own Config object
const azureConfig = new pulumi.Config("azure-native");
// Access configuration values and export them for reuse
export const config = {
    location: azureConfig.get("location"),
    k8sVersion: pulumiConfig.get("k8sVersion") || "1.30.3",
    nodeCount: pulumiConfig.getNumber("nodeCount") || 3,
    nodeSize: pulumiConfig.get("nodeSize") || "Standard_A2_v2",
    adminUserName: pulumiConfig.get("adminUser") || "aksadmin",
    ingressNamespace: pulumiConfig.get("ingressNamespace") || "ingress-nginx",
    appNamespace: pulumiConfig.get("appNamespace") || "apps",
    letsEncryptEmail: pulumiConfig.get("letsEncryptEmail") || "<your_email>",
};
Create an Azure Kubernetes Cluster
To create the AKS cluster, you add a new akscluster.ts file in the resources folder with the following code, which creates the managed cluster:
Explanation of Key Parts in the Code:
AKS Cluster: The core of the setup, an AKS cluster is created with:
agentPoolProfiles: Defines the VM size, node count, and operating system for the worker nodes.
enableRBAC: Enables Kubernetes Role-Based Access Control (RBAC) for cluster management.
networkProfile: Specifies the use of the Azure CNI plugin for network connectivity between pods and Azure resources.
identity: Assigns a user-assigned managed identity, used for integrating securely with other Azure services.
Kubeconfig: The Kubernetes configuration is exported as an output, allowing you to connect to the AKS cluster using tools like kubectl.
Grant the AKS Cluster Identity the AcrPull Role on ACR
Once the AKS cluster and ACR are created, the next step is to assign the AcrPull role to the AKS cluster's managed identity. This is done by creating a role assignment that links the AKS cluster’s managed identity to the ACR.
Now, in the index.ts file, you modify the code to include the AKS cluster.
Once you’re ready, deploy the AKS cluster by running:
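Deployment is a single pulumi up against the current stack:

```shell
$ pulumi up
```

Review the resource preview and confirm with yes when prompted.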
After the cluster is deployed, Pulumi will output the acr, kubeconfig, and resourceGroupName values.
You can save the kubeconfig to a file and connect to your AKS cluster using kubectl:
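One way to do this, using the kubeconfig stack output exported above (the file name kubeconfig.yaml is arbitrary):

```shell
$ pulumi stack output kubeconfig > kubeconfig.yaml
$ export KUBECONFIG=$PWD/kubeconfig.yaml
```

If you mark the kubeconfig as a secret, add the --show-secrets flag to the stack output command.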
You can find more about the kubeconfig environment variable here.
You can now use the following command to interact with your Kubernetes cluster:
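For example, listing the worker nodes (assuming KUBECONFIG points at the saved file):

```shell
$ kubectl get nodes
```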
The output looks like this:
Scaling and Managing the AKS Cluster
You can manage the AKS cluster post-deployment in various ways:
Scaling Nodes: Modify the count in agentPoolProfiles to scale the number of worker nodes, then run pulumi up to apply the changes.
Auto-Scaling: You can enable auto-scaling by adding enableAutoScaling and specifying the minimum and maximum node counts in the agentPoolProfiles configuration.
Upgrades: AKS provides automated Kubernetes version upgrades. You can trigger upgrades via the Azure portal or CLI, or integrate them with Pulumi to automate version updates.
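Scaling through configuration keeps the change declarative: since nodeCount feeds agentPoolProfiles.count, a scale-out is just a config change followed by a deployment:

```shell
$ pulumi config set nodeCount 5
$ pulumi up
```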
// resources/akscluster.ts
import * as azure_native from "@pulumi/azure-native";
import * as pulumi from "@pulumi/pulumi";
import * as tls from "@pulumi/tls";
import * as containerservice from "@pulumi/azure-native/containerservice";
import { config } from "../config";
export const aksCluster = (
    resourceGroupName: pulumi.Input<string>,
    subnetIds: {
        nodeSubnetId: pulumi.Output<string>,
        podSubnetId: pulumi.Output<string>
    }) => {
    // Create a private key to use for the cluster's SSH key
    const privateKey = new tls.PrivateKey("privateKey", {
        algorithm: "RSA",
        rsaBits: 4096,
    });
    // Create a user-assigned identity to use for the cluster
    const identity = new azure_native.managedidentity.UserAssignedIdentity("identity", {
        resourceGroupName: resourceGroupName,
    });
    return new containerservice.ManagedCluster("cluster", {
        resourceGroupName: resourceGroupName,
        // Use a user-specified identity to manage cluster resources
        identity: {
            type: azure_native.containerservice.ResourceIdentityType.UserAssigned,
            userAssignedIdentities: [identity.id],
        },
        agentPoolProfiles: [{
            count: config.nodeCount, // Number of nodes in the pool
            maxPods: 110,
            mode: "System",
            name: "agentpool",
            nodeLabels: {},
            osDiskSizeGB: 30,
            osType: "Linux",
            type: "VirtualMachineScaleSets",
            vmSize: config.nodeSize, // VM size for the nodes
            vnetSubnetID: subnetIds.nodeSubnetId, // Assign nodes to the subnet
            podSubnetID: subnetIds.podSubnetId, // Assign pods to the subnet
        }],
        dnsPrefix: resourceGroupName,
        enableRBAC: true, // Enable Role-Based Access Control
        kubernetesVersion: config.k8sVersion,
        linuxProfile: {
            adminUsername: config.adminUserName, // The admin username for the new cluster
            ssh: {
                publicKeys: [{
                    keyData: privateKey.publicKeyOpenssh,
                }],
            },
        },
        networkProfile: {
            networkPlugin: "azure", // Use Azure CNI for networking
        },
    });
};
...
import { aksCluster } from "./resources/akscluster";
import * as k8s from "@pulumi/kubernetes"; // Needed for the Provider created below
...
// Create an AKS cluster (vnet must expose the nodeSubnetId and podSubnetId outputs expected by aksCluster)
const cluster = aksCluster(resourceGroup.name, vnet);
// Grant AKS Managed Identity `AcrPull` Role on ACR
const acrPullRoleAssignment = new azure_native.authorization.RoleAssignment("aksAcrPullRoleAssignment", {
    // The kubelet identity is what actually pulls images from ACR
    principalId: cluster.identityProfile.apply(profile => profile!.kubeletidentity.objectId!),
    principalType: "ServicePrincipal",
    roleDefinitionId: azure_native.authorization.getRoleDefinitionOutput({
        roleDefinitionId: "7f951dda-4ed3-4680-a7ca-43fe172d538d", // Built-in role ID for AcrPull
        scope: acr.id, // Scope is the ACR
    }).apply(roleDef => roleDef.id),
    scope: acr.id, // ACR resource ID
});
// Export the AKS Cluster kubeconfig
export const kubeconfig = pulumi.all([cluster.name, resourceGroup.name]).apply(([clusterName, rgName]) =>
azure_native.containerservice.listManagedClusterUserCredentials({
resourceGroupName: rgName,
resourceName: clusterName,
}).then(creds => Buffer.from(creds.kubeconfigs[0].value, "base64").toString())
);
const provider = new k8s.Provider("k8s-provider", {
kubeconfig: kubeconfig,
});
...
NAME STATUS ROLES AGE VERSION
aks-agentpool-36574824-vmss000000 Ready <none> 74m v1.30.3
aks-agentpool-36574824-vmss000001 Ready <none> 74m v1.30.3
aks-agentpool-36574824-vmss000002 Ready <none> 74m v1.30.3
// agentPoolProfiles with auto-scaling enabled
agentPoolProfiles: [{
name: "agentpool",
minCount: config.nodeCount, // Minimum node count
maxCount: 5, // Maximum node count
enableAutoScaling: true, // Enable auto-scaling
count: config.nodeCount, // Number of nodes in the pool
maxPods: 110,
mode: "System",
nodeLabels: {},
osDiskSizeGB: 30,
osType: "Linux",
type: "VirtualMachineScaleSets",
vmSize: config.nodeSize, // VM size for the nodes
vnetSubnetID: subnetIds.nodeSubnetId, // Assign nodes to the subnet
podSubnetID: subnetIds.podSubnetId // Assign pods to the subnet
}]