Deploying Docker Applications to Azure Kubernetes Service
Kubernetes as a foundational technology
In today’s cloud-native development landscape, Kubernetes has become a foundational technology for managing containerized applications. Its flexibility, scalability, and support across platforms make it a cornerstone of modern DevOps workflows.
This guide will walk you through deploying a Docker-based
application to Azure Kubernetes Service (AKS) using Azure
DevOps CI/CD pipelines. Along the way, you'll gain insights into
Kubernetes architecture, Azure Container Registry, and practical CI/CD
implementation strategies.
✅ Kubernetes History at a Glance
- Origins at Google: Kubernetes was born from Google's internal container orchestration system called Borg, which they'd been using for over a decade to manage billions of containers per week.
- First Announcement: Kubernetes was announced publicly by Google in June 2014.
- Open Source Release: The first open-source version was released shortly after, and it was donated to the Cloud Native Computing Foundation (CNCF) in 2015.
- Version 1.0: Official Kubernetes v1.0 was released on July 21, 2015.
- Rapid Growth: Since then, Kubernetes has had regular quarterly releases, with strong backing from a broad community including Google, Microsoft, Red Hat, AWS, and others.
- Why Open Source?: Google open-sourced it to standardize orchestration across clouds and avoid vendor lock-in, encouraging a neutral ecosystem (via CNCF).
What is Kubernetes?
Kubernetes is an open-source container
orchestration system for automating application deployment, scaling, and
management. It handles the scheduling and operation of containerized
applications across clusters of machines, ensuring high availability, fault tolerance,
and optimal resource utilization.
Key Features
- Automated deployment & scaling
- Self-healing (automatic restarts and replacements)
- Service discovery & load balancing
- Efficient resource utilization
- Declarative configuration and automation
Core Concepts: Pods and Nodes
🧱 Pods
A Pod is the smallest and most basic
deployable unit in Kubernetes. It represents a single instance of a running
process and may contain one or more containers that share:
- Storage volumes
- A network namespace and IP address
- Configuration details
Each pod is scheduled to run on a Node and
represents an isolated environment.
Example: A frontend pod with one container, or a backend pod
with two containers and a shared volume.
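As a hedged illustration (the names, labels, and image here are placeholders rather than anything from the demo), a minimal single-container frontend Pod could be declared and created like this:

```bash
# Minimal Pod manifest applied directly with kubectl (illustrative names and image)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: web
      image: mcr.microsoft.com/dotnet/samples:aspnetapp   # public sample image, for illustration only
      ports:
        - containerPort: 8080
EOF
```

In practice you rarely create bare Pods like this; a Deployment (sketched later in the release pipeline section) keeps the desired number of pod replicas running for you.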
🖥️ Nodes
A Node is a worker machine (VM or physical
server) that runs the containerized workloads. It includes:
- A container runtime (like Docker)
- Kubelet: the agent that communicates with the Kubernetes control plane
- Kube-proxy: for networking
Nodes are managed by the Kubernetes control plane, which schedules and distributes pods across them.
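To inspect the nodes of a cluster and see where pods have been scheduled, the standard kubectl commands are:

```bash
# List worker nodes with status, roles, Kubernetes version, and IP addresses
kubectl get nodes -o wide

# Show which node each pod is running on (across all namespaces)
kubectl get pods --all-namespaces -o wide
```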
Azure Kubernetes Service (AKS)
AKS is a managed Kubernetes service provided by
Microsoft Azure. It abstracts the complexity of Kubernetes setup and operation,
making it easy to deploy and manage containerized applications in the cloud.
Benefits of AKS:
- No need to manage master nodes
- Automatic upgrades and patching
- Integrated monitoring
- Horizontal scaling
- Secure with Azure Active Directory integration
- Native support for Kubernetes YAML deployments
AKS supports deploying infrastructure using Kubernetes manifests (YAML files),
allowing infrastructure-as-code integration directly into the deployment
pipeline.
Azure Container Registry (ACR)
The Azure Container Registry (ACR) is a
private registry service that stores Docker images for container deployments.
It enables:
- Secure image storage
- Integration with Azure services (App Services, AKS, etc.)
- CI/CD pipelines pulling and pushing images seamlessly
In this workflow, ACR is used to:
- Store built Docker images
- Serve as the image source for deployments to AKS
To enable AKS to pull images from ACR, a role assignment is
required to authenticate AKS with ACR using a service principal.
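For reference, pushing a locally built image to ACR looks roughly like this (the registry and image names are placeholders):

```bash
# Authenticate Docker against the registry using your Azure CLI login
az acr login --name aksdemoacr123

# Tag the local image with the registry's login server, then push it
docker tag myapp:latest aksdemoacr123.azurecr.io/myapp:latest
docker push aksdemoacr123.azurecr.io/myapp:latest

# Verify the repository now exists in ACR
az acr repository list --name aksdemoacr123 -o table
```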
CI/CD Workflow Overview
Here’s the high-level deployment workflow using Azure
DevOps CI/CD pipelines:
🛠 Continuous Integration (CI)
- Developer commits code → triggers CI pipeline
- CI pipeline:
  - Pulls base Docker image (e.g., .NET Core)
  - Builds and packages the application
  - Pushes Docker image to ACR
  - Publishes artifacts: .yaml manifest, .dacpac (SQL package)
🚀 Continuous Deployment (CD)
- Release pipeline triggered post-build
- CD pipeline:
  - Deploys database via .dacpac to Azure SQL
  - Authenticates AKS with ACR
  - Deploys image to AKS using Kubernetes manifest
  - Creates a load balancer for frontend access
Detailed Demo Steps
📦 Step 1: Project & Extension Setup
- Create an Azure DevOps project
- Install necessary extensions:
  - Kubernetes: for image deployments
  - Replace Tokens: for dynamic config replacements in YAML and appsettings
🌐 Step 2: Provision Azure Resources
Using Azure Cloud Shell & CLI:
- Create a Resource Group
- Create an AKS cluster (3 worker nodes; the demo application runs as 2 pods)
- Create Azure Container Registry (ACR)
- Create Azure SQL Server and Database
Enable monitoring during AKS creation to utilize built-in logging and
performance tracking.
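A rough Azure CLI sketch of these provisioning steps; every name, region, and SKU below is a placeholder to replace with your own values:

```bash
# Resource group
az group create --name aks-demo-rg --location eastus

# AKS cluster with 3 worker nodes and monitoring (Container Insights) enabled
az aks create \
  --resource-group aks-demo-rg \
  --name aks-demo-cluster \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Azure Container Registry (the name must be globally unique)
az acr create --resource-group aks-demo-rg --name aksdemoacr123 --sku Basic

# Azure SQL logical server and database
az sql server create --resource-group aks-demo-rg --name aks-demo-sql \
  --admin-user sqladmin --admin-password '<strong-password>'
az sql db create --resource-group aks-demo-rg --server aks-demo-sql \
  --name appdb --service-objective S0
```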
🔐 Step 3: Grant AKS Access to ACR
- Retrieve the AKS service principal ID
- Retrieve the ACR resource ID
- Create a role assignment so AKS can pull images from ACR
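A hedged CLI sketch of that role assignment, reusing the placeholder names from Step 2 (newer AKS clusters run under a managed identity rather than a service principal, in which case `az aks update --attach-acr` is the simpler equivalent):

```bash
# Client ID of the service principal the AKS cluster runs under
CLIENT_ID=$(az aks show --resource-group aks-demo-rg --name aks-demo-cluster \
  --query servicePrincipalProfile.clientId -o tsv)

# Full resource ID of the container registry
ACR_ID=$(az acr show --resource-group aks-demo-rg --name aksdemoacr123 --query id -o tsv)

# Grant AKS pull access to ACR
az role assignment create --assignee "$CLIENT_ID" --role acrpull --scope "$ACR_ID"
```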
🧪 Step 4: Configure CI Build Pipeline
Build tasks include:
- Replace Tokens in appsettings.json and aks-deployment.yaml
- Pull the base image (.NET Core) from a public registry
- Build Docker image with application
- Tag and push image to ACR
- Publish artifacts (YAML & DACPAC)
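Outside the pipeline, the Docker portion of this stage boils down to commands like the following; the pipeline's Docker tasks perform the equivalent, and the names and tags here are placeholders:

```bash
# BUILD_ID stands in for the pipeline's build number
BUILD_ID=123

# Build the application image from the repository's Dockerfile
# (the .NET Core base image is pulled automatically during the build)
docker build -t myapp:"$BUILD_ID" .

# Tag for ACR and push
az acr login --name aksdemoacr123
docker tag myapp:"$BUILD_ID" aksdemoacr123.azurecr.io/myapp:"$BUILD_ID"
docker push aksdemoacr123.azurecr.io/myapp:"$BUILD_ID"
```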
🚀 Step 5: Configure CD Release Pipeline
Release tasks include:
- Deploy Database using DACPAC
- Authenticate with ACR and AKS
- Deploy Kubernetes resources:
  - Use aks-deployment.yaml to create the Deployment and a LoadBalancer Service
  - Apply image updates to AKS
The Replace Tokens task ensures environment-specific values (e.g., server names) are injected into the manifests dynamically, as in the sketch below.
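A hedged sketch of what such an aks-deployment.yaml might contain, with #{...}# placeholders for the Replace Tokens task; the resource names and token keys are illustrative, not the exact manifest from the demo:

```bash
cat > aks-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "#{ACRLoginServer}#/myapp:#{BuildId}#"   # token placeholders replaced by the pipeline
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer          # Azure provisions a public load balancer with an external IP
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
EOF

# Applied by the release pipeline (or manually) against the AKS cluster
kubectl apply -f aks-deployment.yaml
```

The Service of type LoadBalancer is what gives the frontend its public IP on Azure; the Deployment keeps the desired number of pod replicas running and rolls out new image tags.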
Kubernetes Dashboard & Monitoring
After deployment:
- Access the web application via the external IP of the frontend LoadBalancer Service
- Install and configure kubectl to interact with the AKS cluster
- Use kubectl proxy to access the Kubernetes Dashboard
- Grant permissions to the dashboard's service account using a ClusterRoleBinding
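The corresponding commands, assuming the placeholder names used earlier (the ClusterRoleBinding shown grants broad rights and is only appropriate for a demo, not production):

```bash
# Merge the AKS cluster credentials into your local kubeconfig
az aks get-credentials --resource-group aks-demo-rg --name aks-demo-cluster

# Find the external IP assigned to the frontend LoadBalancer Service
kubectl get service myapp-lb

# Start a local proxy to reach the Kubernetes Dashboard (if the dashboard is installed)
kubectl proxy

# Grant the dashboard's service account permissions via a ClusterRoleBinding
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard
```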
Monitoring with Azure:
- Navigate to Insights in the AKS resource
- Monitor:
- Nodes
- Pods
- Controllers
- Containers
You can scale the cluster, upgrade Kubernetes versions, or manage resources directly from the Azure Portal or via the CLI.
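For example, scaling and upgrading from the CLI (again using the placeholder names from the provisioning step):

```bash
# Scale the default node pool to 5 nodes
az aks scale --resource-group aks-demo-rg --name aks-demo-cluster --node-count 5

# List the Kubernetes versions the cluster can upgrade to, then upgrade
az aks get-upgrades --resource-group aks-demo-rg --name aks-demo-cluster -o table
az aks upgrade --resource-group aks-demo-rg --name aks-demo-cluster \
  --kubernetes-version <target-version>
```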
Deploying Docker-based applications to Azure Kubernetes Service using a fully
automated CI/CD pipeline provides a robust, scalable, and production-ready
DevOps workflow. By leveraging:
- AKS for orchestration,
- ACR for image management, and
- Azure DevOps for pipeline automation
you can achieve seamless deployments, efficient scaling, and reliable management of containerized services.
Whether you're building microservices, scaling enterprise systems, or automating database deployments, this setup provides a solid foundation for continuous delivery in a cloud-native world.
Containers (like Docker) allow you to package applications with their dependencies into isolated units. But managing containers at scale (networking them, scaling them, recovering from failures, rolling out updates) quickly becomes complex.
That’s exactly what Kubernetes solves. It
orchestrates these containers, placing them on nodes, monitoring their health,
and scaling them up/down automatically.
