
A comprehensive guide to containerizing ASP.NET Core applications, managing them with Docker Compose for development, and orchestrating production deployments with Kubernetes.
Behnam Nouri
Building scalable backend applications requires more than just solid code—it demands a robust deployment strategy. In my journey as a senior C# developer transitioning into DevOps, I've learned that containerization and orchestration are essential skills for modern backend engineers. This article walks through my practical approach to scaling ASP.NET Core applications from local development to production-grade Kubernetes clusters.
Before diving into code, let me clarify why containers matter for backend developers. Traditional deployment approaches often suffer from the classic "it works on my machine" problem. Docker solves this by packaging your entire application environment—code, dependencies, runtime—into a reproducible unit called a container.
For one of my recent projects, I had a Web API running perfectly on my Windows machine using Visual Studio and LocalDB, but deployment to a Linux server created unexpected issues with database connectivity, environment variables, and dependency versions. This is where containers became invaluable.
The foundation of containerization is the Dockerfile—a blueprint that defines how your application image is built. Here's my approach for a production-grade ASP.NET Core API:
# Multi-stage build - reduces final image size significantly
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS builder
WORKDIR /src
# Copy project files
COPY ["MyApi.csproj", "./"]
RUN dotnet restore "MyApi.csproj"
# Copy source and build
COPY . .
RUN dotnet build "MyApi.csproj" -c Release -o /app/build
# Publish stage
FROM builder AS publish
RUN dotnet publish "MyApi.csproj" -c Release -o /app/publish /p:UseAppHost=false
# Runtime stage - minimal image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=publish /app/publish .
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "MyApi.dll"]
Why this structure matters: Multi-stage builds reduce your final image size by ~80%. The builder stage includes the entire SDK, but the final runtime stage only includes the ASP.NET Core runtime—much smaller and more secure.
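To sanity-check the image locally, I build and run it before anything touches a registry (the myapi:local tag here is just an example name):

docker build -t myapi:local .
docker run --rm -p 8080:8080 myapi:local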
When building production Docker images for C# applications, I always follow these practices:
Use specific SDK versions: Never use latest tags. Pinning to 8.0 ensures reproducible builds across environments.
Leverage .NET's UseAppHost=false: This skips generating the platform-specific executable host, so the published output stays framework-dependent and is launched with dotnet MyApi.dll, which is exactly what the container's ENTRYPOINT expects.
Set proper environment variables: The ASPNETCORE_URLS variable ensures your application listens on the correct port inside the container.
Use non-root users: For production, I always add a non-root user in the final runtime stage (after the published files are copied in) to reduce security risks:
RUN useradd -m -u 1000 appuser && chown -R appuser /app
USER appuser
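If you are on the official .NET 8 images, there is a shortcut: they already ship with a preconfigured non-root user, so instead of creating one yourself you can simply switch to it in the final stage:

# The 8.0 runtime images include a non-root 'app' user out of the box
USER app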
Docker Compose transforms how I develop applications. Instead of running databases locally and managing multiple services manually, I define everything in a single file.
Here's a realistic setup I use for ASP.NET Core projects with multiple services:
version: "3.8"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionStrings__DefaultConnection=Server=sqlserver;Database=MyAppDb;User Id=sa;Password=YourPassword123!;Encrypt=false;
    depends_on:
      - sqlserver
    volumes:
      - .:/src
    networks:
      - myapp-network

  sqlserver:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourPassword123!"
    ports:
      - "1433:1433"
    volumes:
      - sqlserver-data:/var/opt/mssql
    networks:
      - myapp-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - myapp-network

volumes:
  sqlserver-data:
  redis-data:

networks:
  myapp-network:
    driver: bridge
The beauty of this setup is that it replicates your production environment locally. Every developer on the team runs the exact same services with identical configurations. When I run docker-compose up, I get the API, SQL Server, and Redis running together on a shared network, with database and cache data persisted in named volumes.
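Day to day, that boils down to a handful of commands:

docker-compose up --build -d    # build the API image and start all services in the background
docker-compose logs -f api      # follow the API logs
docker-compose down -v          # stop everything and remove the data volumes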
One challenge I solved was handling Entity Framework migrations in containers. Here's my approach:
api:
  # ...
  entrypoint:
    - /bin/sh
    - -c
    - |
      dotnet ef database update
      dotnet MyApi.dll
This ensures migrations run automatically when the container starts. In production, I handle this differently through dedicated migration jobs.
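What that looks like in practice varies, but here is a minimal sketch of a Kubernetes Job, assuming a hypothetical myapi-migrations image that contains a self-contained EF Core migration bundle produced with dotnet ef migrations bundle:

apiVersion: batch/v1
kind: Job
metadata:
  name: myapi-migrations
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          # Hypothetical image holding the EF Core migration bundle (efbundle)
          image: myregistry.azurecr.io/myapi-migrations:v1.0.0
          command: ["/bin/sh", "-c"]
          # The bundle accepts --connection, so it can reuse the connection-string secret covered later in this article
          args: ['./efbundle --connection "$ConnectionStrings__DefaultConnection"']
          env:
            - name: ConnectionStrings__DefaultConnection
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: connection-string

Running the Job once per release, before the new application pods roll out, keeps schema changes out of the API's startup path.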
After mastering Docker Compose for local development, scaling to Kubernetes might seem overwhelming. But understanding the core concepts makes it manageable.
Before jumping to Kubernetes, I always ask: do we need it? Docker Compose works wonderfully for single-server deployments. Kubernetes is necessary when you need self-healing and automatic failover, horizontal scaling across multiple nodes, zero-downtime rolling updates, or declarative management of many services.
For my current projects, I use Kubernetes because we need automatic failover and the ability to scale API instances based on demand.
The transition from Docker Compose to Kubernetes involves translating your compose file into Kubernetes manifests. Here's my ASP.NET Core deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapi-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapi
  template:
    metadata:
      labels:
        app: myapi
    spec:
      containers:
        - name: myapi
          image: myregistry.azurecr.io/myapi:v1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: "Production"
            - name: ConnectionStrings__DefaultConnection
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: connection-string
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
What each section does: replicas: 3 keeps three identical pods running at all times; the selector and the template labels tie the Deployment to the pods it manages; the env block sets the environment name and pulls the connection string from a Kubernetes Secret instead of hardcoding it; resources declares how much CPU and memory each pod requests and is allowed to consume; and the liveness and readiness probes tell Kubernetes when to restart a pod and when it is ready to receive traffic.
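Applying and watching the rollout is then straightforward (assuming the manifest is saved as deployment.yaml):

kubectl apply -f deployment.yaml
kubectl rollout status deployment/myapi-deployment
kubectl get pods -l app=myapi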
A deployment alone doesn't expose your application. I use a Service to enable internal communication and Ingress for external access:
apiVersion: v1
kind: Service
metadata:
  name: myapi-service
spec:
  selector:
    app: myapi
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapi-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapi-service
                port:
                  number: 80
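One caveat: an Ingress only takes effect if an ingress controller is installed in the cluster, and depending on your setup you may also need to say which controller should handle it. With the NGINX Ingress Controller, for example, that means adding a class name to the spec:

spec:
  ingressClassName: nginx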
Never embed credentials in Docker images or Kubernetes manifests. I always use Kubernetes Secrets:
kubectl create secret generic db-secret \
--from-literal=connection-string="Server=sqlserver;Database=MyApp;User Id=sa;Password=SecurePassword;"
My deployment workflow uses GitHub Actions to automatically build and push Docker images:
name: Build and Push Docker Image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: myregistry.azurecr.io
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and Push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: myregistry.azurecr.io/myapi:${{ github.sha }}
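This workflow stops at pushing the image; the rollout itself can happen in a later step or a separate pipeline. As a sketch, assuming the runner has kubectl configured against the cluster, one minimal option is to point the existing deployment at the freshly pushed tag:

      - name: Deploy new image
        run: |
          kubectl set image deployment/myapi-deployment \
            myapi=myregistry.azurecr.io/myapi:${{ github.sha }}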
Kubernetes is happiest running stateless workloads, so stateful components like databases require special attention. For production, I recommend keeping the database outside the cluster in a managed service (Azure SQL Database, for example) whenever possible, and using StatefulSets with PersistentVolumes only when the database genuinely has to run inside Kubernetes.
In my first production Kubernetes deployment, I hardcoded environment-specific values and quickly realized the nightmare of maintaining multiple manifests. Now I use tools like Helm for templating:
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
replicas: {{ .Values.replicas }}
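The values those placeholders resolve to then live in a small values.yaml per environment (the names here follow the template snippet above):

# values.yaml
image:
  repository: myregistry.azurecr.io/myapi
  tag: v1.0.0
replicas: 3

A single helm upgrade --install with the right values file then produces environment-specific manifests from one template.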
Without proper resource requests and limits, Kubernetes containers can consume unbounded resources and starve other applications on the same node. I learned this the hard way when a memory leak in one API pod crashed the entire node. Now I always set requests based on load testing and cap limits at roughly 1.5-2x the requests to allow temporary spikes.

Kubernetes also needs to know when your application is healthy, and generic HTTP checks aren't always sufficient. For C# applications, I implement detailed health checks:
// using Microsoft.AspNetCore.Diagnostics.HealthChecks;
// Program.cs: register the checks before building the app
builder.Services.AddHealthChecks()
    .AddCheck("database", () => CheckDatabaseConnection())
    .AddCheck("redis", () => CheckRedisConnection());

var app = builder.Build();

// Liveness endpoint used by the probes in the deployment manifest
app.MapHealthChecks("/health", new HealthCheckOptions
{
    // WriteResponse is a custom delegate that serializes each check's status
    ResponseWriter = WriteResponse
});
Version everything: Don't rely on latest tags. Use semantic versioning (v1.2.3) for reproducible deployments.
Implement graceful shutdown: Kubernetes sends SIGTERM before killing containers. Configure your ASP.NET Core app to handle this properly:
var lifetime = app.Lifetime;
lifetime.ApplicationStopping.Register(OnShutdown);
void OnShutdown()
{
// Complete in-flight requests, close connections, cleanup resources
}
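Alongside that hook, I give the host enough time to actually drain requests; a sketch of the relevant setting (the exact value depends on your traffic):

// Give the host time to finish in-flight work on SIGTERM; keep this below the
// pod's terminationGracePeriodSeconds (30 seconds by default in Kubernetes).
builder.Services.Configure<HostOptions>(options =>
    options.ShutdownTimeout = TimeSpan.FromSeconds(25));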
Monitor and log: Use structured logging and ensure your logs reach a centralized system. Kubernetes doesn't persist container logs.
Test your disaster recovery: Regularly simulate failures to ensure your applications recover properly.
Implementing this architecture across my projects has given me reproducible environments from local development through production, automatic failover when pods misbehave, and the ability to scale API instances on demand.
The landscape of containerization and orchestration continues to evolve. New tools emerge, best practices refine, and cloud platforms offer increasingly sophisticated managed services. The foundation I've shared—understanding containers, mastering Docker Compose, and implementing Kubernetes strategically—remains relevant across these changes.
What aspects of containerization and orchestration are you implementing in your own projects? I'd love to hear about your experiences and the unique challenges you've overcome.