Aug 24, 2025 - 12 MIN READ
Scaling ASP.NET Core Applications with Docker and Kubernetes: A Practical Guide

A comprehensive guide to containerizing ASP.NET Core applications, managing them with Docker Compose for development, and orchestrating production deployments with Kubernetes.

Behnam Nouri

Building scalable backend applications requires more than just solid code—it demands a robust deployment strategy. In my journey as a senior C# developer transitioning into DevOps, I've learned that containerization and orchestration are essential skills for modern backend engineers. This article walks through my practical approach to scaling ASP.NET Core applications from local development to production-grade Kubernetes clusters.

Understanding the Container Journey

Before diving into code, let me clarify why containers matter for backend developers. Traditional deployment approaches often suffer from the classic "it works on my machine" problem. Docker solves this by packaging your entire application environment—code, dependencies, runtime—into a reproducible unit called a container.

The Problem We're Solving

For one of my recent projects, I had a Web API running perfectly on my Windows machine using Visual Studio and LocalDB, but deployment to a Linux server created unexpected issues with database connectivity, environment variables, and dependency versions. This is where containers became invaluable.

Phase 1: Containerizing Your ASP.NET Core Application

Creating Your First Dockerfile

The foundation of containerization is the Dockerfile—a blueprint that defines how your application image is built. Here's my approach for a production-grade ASP.NET Core API:

# Multi-stage build - reduces final image size significantly
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS builder
WORKDIR /src

# Copy project files
COPY ["MyApi.csproj", "./"]
RUN dotnet restore "MyApi.csproj"

# Copy source and build
COPY . .
RUN dotnet build "MyApi.csproj" -c Release -o /app/build

# Publish stage
FROM builder AS publish
RUN dotnet publish "MyApi.csproj" -c Release -o /app/publish /p:UseAppHost=false

# Runtime stage - minimal image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=publish /app/publish .
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "MyApi.dll"]

Why this structure matters: Multi-stage builds typically cut the final image size by around 80%. The builder stage carries the entire SDK, but the final runtime stage includes only the ASP.NET Core runtime—far smaller to pull and with a much smaller attack surface.
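With the Dockerfile in place, building and running the image locally is a quick sanity check (the myapi:dev tag is arbitrary; the app listens on 8080 per the ENV above):

docker build -t myapi:dev .
docker run --rm -p 8080:8080 myapi:dev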

Key Dockerfile Principles

When building production Docker images for C# applications, I always follow these practices:

Use specific SDK versions: Never use latest tags. Pinning to 8.0 (or, stricter still, an image digest) keeps builds reproducible across environments.

Leverage .NET's UseAppHost=false: This skips generating the platform-specific native launcher, so the publish output stays platform-neutral and the app is started with dotnet MyApi.dll, exactly as the ENTRYPOINT above does.

Set proper environment variables: The ASPNETCORE_URLS variable ensures your application listens on the correct port inside the container.

Use non-root users: For production, I always run as a non-root user to limit the damage a compromised container can do. The .NET 8 images ship with a predefined non-root app user you can opt into with USER app, or you can create your own:

RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

Phase 2: Local Development with Docker Compose

Docker Compose transforms how I develop applications. Instead of running databases locally and managing multiple services manually, I define everything in a single file.

My Docker Compose Setup

Here's a realistic setup I use for ASP.NET Core projects with multiple services:

version: "3.8"

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionStrings__DefaultConnection=Server=sqlserver;Database=MyAppDb;User Id=sa;Password=YourPassword123!;Encrypt=false;
    depends_on:
      - sqlserver
    volumes:
      - .:/src
    networks:
      - myapp-network

  sqlserver:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "YourPassword123!"
    ports:
      - "1433:1433"
    volumes:
      - sqlserver-data:/var/opt/mssql
    networks:
      - myapp-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - myapp-network

volumes:
  sqlserver-data:
  redis-data:

networks:
  myapp-network:
    driver: bridge

Why This Approach Works

The beauty of this setup is that it mirrors your production topology locally. Every developer on the team runs the exact same services with identical configurations. When I run docker compose up, I get:

  • An ASP.NET Core API container (mount the source and run dotnet watch as the command if you want hot reload; the published binary alone won't pick up code changes)
  • SQL Server for persistent data
  • Redis for caching and session management
  • All connected via a shared Docker network
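
Day to day, that boils down to a handful of commands (shown in Compose v2 syntax; the older docker-compose binary behaves the same):

docker compose up -d --build    # build images and start everything in the background
docker compose logs -f api      # follow the API logs
docker compose exec api sh      # open a shell inside the running API container
docker compose down             # stop and remove containers (named volumes survive)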

Managing Database Migrations

One challenge I solved was handling Entity Framework migrations in containers. Here's my approach:

  api:
    ...
    entrypoint:
      - /bin/sh
      - -c
      - |
        dotnet ef database update
        dotnet MyApi.dll

This ensures migrations run automatically when the container starts. One caveat: dotnet ef is not part of the runtime image, so this pattern only works if the image includes the SDK (or the dotnet-ef tool), or if you publish a self-contained migration bundle. In production, I handle this differently through dedicated migration jobs.
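
A simpler alternative that avoids shipping EF tooling in the image is to let the application apply pending migrations at startup. A minimal sketch, assuming a hypothetical AppDbContext and the Microsoft.EntityFrameworkCore.SqlServer package; with multiple replicas you would still want a dedicated migration step so instances don't race each other:

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// AppDbContext is a placeholder name for your EF Core context.
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

var app = builder.Build();

// Apply any pending migrations before the app starts serving traffic.
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    db.Database.Migrate();
}

app.Run();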

Phase 3: Building Your Kubernetes Strategy

After mastering Docker Compose for local development, scaling to Kubernetes might seem overwhelming. But understanding the core concepts makes it manageable.

Kubernetes vs Docker Compose

Before jumping to Kubernetes, I always ask: do we need it? Docker Compose works wonderfully for single-server deployments. Kubernetes is necessary when you need:

  • High availability: Automatic container restarts and multi-node deployment
  • Horizontal scaling: Running multiple instances of your application
  • Zero-downtime updates: Rolling deployments without service interruption
  • Complex networking: Load balancing across multiple services

For my current projects, I use Kubernetes because we need automatic failover and the ability to scale API instances based on demand.

Creating Your First Deployment

The transition from Docker Compose to Kubernetes involves translating your compose file into Kubernetes manifests. Here's my ASP.NET Core deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapi-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapi
  template:
    metadata:
      labels:
        app: myapi
    spec:
      containers:
        - name: myapi
          image: myregistry.azurecr.io/myapi:v1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: "Production"
            - name: ConnectionStrings__DefaultConnection
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: connection-string
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5

What each section does:

  • Replicas: 3 — Runs three instances of your application for high availability
  • Resources — Ensures proper container scheduling by declaring CPU and memory needs
  • Probes — Health checks that Kubernetes uses to determine if your container is running properly
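
Earlier I mentioned scaling API instances based on demand. A HorizontalPodAutoscaler targeting this deployment is the standard mechanism; a minimal sketch, assuming the cluster runs the metrics-server so CPU utilization (measured against the requests declared above) is available:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapi-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapi-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70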

Service and Ingress Configuration

A deployment alone doesn't expose your application. I use a Service for stable internal communication and an Ingress for external access. Note that an Ingress only does something if an ingress controller (such as ingress-nginx) is installed in the cluster, and most setups also set spec.ingressClassName to select it:

apiVersion: v1
kind: Service
metadata:
  name: myapi-service
spec:
  selector:
    app: myapi
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapi-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapi-service
                port:
                  number: 80
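
Applying and verifying all of these manifests is straightforward (assuming they live in a k8s/ directory):

kubectl apply -f k8s/
kubectl get pods -l app=myapi
kubectl get ingress myapi-ingress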

Phase 4: Advanced DevOps Patterns

Managing Secrets Securely

Never embed credentials in Docker images or Kubernetes manifests. I always use Kubernetes Secrets:

kubectl create secret generic db-secret \
  --from-literal=connection-string="Server=sqlserver;Database=MyApp;User Id=sa;Password=SecurePassword;"

Implementing CI/CD Pipelines

My deployment workflow uses GitHub Actions to automatically build and push Docker images:

name: Build and Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: myregistry.azurecr.io
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Build and Push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: myregistry.azurecr.io/myapi:${{ github.sha }}
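
The workflow above stops at pushing the image; rolling it out can be a second job appended under jobs:. A sketch of one option, assuming cluster credentials are stored in a KUBE_CONFIG repository secret (azure/k8s-set-context is one way to load them) and the names match the Phase 3 manifest:

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG }}

      - name: Roll out new image
        run: |
          kubectl set image deployment/myapi-deployment \
            myapi=myregistry.azurecr.io/myapi:${{ github.sha }}
          kubectl rollout status deployment/myapi-deployment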

Database Considerations

Pods in Kubernetes are ephemeral by design, so stateful components like databases require special attention. For production, I recommend:

  • Don't run databases in Kubernetes for critical data—use managed services like Azure SQL or AWS RDS
  • Use persistent volumes only for temporary caching or development (see the PVC sketch after this list)
  • Implement backup strategies separately from container orchestration
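
For that caching/development case, a PersistentVolumeClaim is usually all you need. A minimal sketch, assuming the cluster provides a default StorageClass:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-cache-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi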

Real-World Lessons Learned

Challenge 1: Configuration Management

In my first production Kubernetes deployment, I hardcoded environment-specific values and quickly realized the nightmare of maintaining multiple manifests. Now I use tools like Helm for templating:

image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
replicas: {{ .Values.replicas }}
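
Per-environment values then live in their own files, and the same chart deploys everywhere. Typical usage, assuming the chart sits in ./charts/myapi:

helm upgrade --install myapi ./charts/myapi \
  -f values-production.yaml \
  --set image.tag=v1.2.3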

Challenge 2: Resource Allocation

Without proper resource requests and limits, Kubernetes containers consume unbounded resources, impacting other applications. I learned this the hard way when a memory leak in one API pod crashed the entire node. Now I always:

  • Define realistic requests based on load testing
  • Set limits at 1.5-2x the requests to allow temporary spikes
  • Monitor actual resource consumption and adjust

Challenge 3: Health Checks

Kubernetes needs to know when your application is healthy. Generic HTTP checks aren't always sufficient. For C# applications, I implement detailed health checks:

using Microsoft.AspNetCore.Diagnostics.HealthChecks;

// Register the checks before building the app. CheckDatabaseConnection and
// CheckRedisConnection are your own helpers returning a HealthCheckResult.
builder.Services.AddHealthChecks()
    .AddCheck("database", () => CheckDatabaseConnection())
    .AddCheck("redis", () => CheckRedisConnection());

var app = builder.Build();

// WriteResponse is a custom delegate that serializes each check's status.
app.MapHealthChecks("/health", new HealthCheckOptions
{
    ResponseWriter = WriteResponse
});

Best Practices from the Field

Version everything: Don't rely on latest tags. Use semantic versioning (v1.2.3) for reproducible deployments.

Implement graceful shutdown: Kubernetes sends SIGTERM before killing containers. Configure your ASP.NET Core app to handle this properly:

// Kubernetes sends SIGTERM, then waits terminationGracePeriodSeconds
// (30s by default) before sending SIGKILL. Give the host time to drain:
builder.Host.ConfigureHostOptions(options =>
    options.ShutdownTimeout = TimeSpan.FromSeconds(30));

var app = builder.Build();

app.Lifetime.ApplicationStopping.Register(OnShutdown);

void OnShutdown()
{
    // Complete in-flight requests, close connections, clean up resources
}

Monitor and log: Use structured logging and ship logs to a centralized system. Kubernetes keeps container logs only on the node and only while the pod exists, so they vanish when pods are rescheduled.

Test your disaster recovery: Regularly simulate failures to ensure your applications recover properly.
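
Even a crude drill catches regressions. For example, deleting the API pods and watching Kubernetes bring replacements up:

kubectl delete pod -l app=myapi     # kill every API pod
kubectl get pods -l app=myapi -w    # watch replacements get scheduled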

Results and Impact

Implementing this architecture across my projects has resulted in:

  • 99.9% uptime through automatic failover and multi-instance deployment
  • 50% faster deployments through automated CI/CD pipelines
  • Significant operational cost reduction by right-sizing resource allocations
  • Team confidence in production deployments through reproducible infrastructure

Moving Forward

The landscape of containerization and orchestration continues to evolve. New tools emerge, best practices refine, and cloud platforms offer increasingly sophisticated managed services. The foundation I've shared—understanding containers, mastering Docker Compose, and implementing Kubernetes strategically—remains relevant across these changes.

What aspects of containerization and orchestration are you implementing in your own projects? I'd love to hear about your experiences and the unique challenges you've overcome.
