CI/CD Pipeline Design: From Code to Production
A well-designed CI/CD pipeline is the backbone of modern software delivery. It automates the journey from a developer committing code to that code running in production — catching bugs early, enforcing quality standards, and enabling rapid, reliable releases. Without CI/CD, teams spend days on manual testing, error-prone deployments, and painful rollbacks.
In this guide, we cover the core concepts of continuous integration and continuous delivery, the stages of a production-grade pipeline, testing strategies, artifact management, and the tools that make it all work. For what happens after the pipeline deploys, see Deployment Strategies and Feature Flags.
CI vs Continuous Delivery vs Continuous Deployment
These three terms are often conflated, but they mean different things:
| Term | Definition | Automation Level |
|---|---|---|
| Continuous Integration (CI) | Automatically build and test every commit pushed or proposed to the main branch | Build + Test |
| Continuous Delivery (CD) | Automatically prepare every commit for release to production (but deploy manually) | Build + Test + Stage |
| Continuous Deployment (CD) | Automatically deploy every commit that passes all pipeline stages to production | Build + Test + Stage + Deploy |
Most teams start with CI, graduate to continuous delivery, and eventually reach continuous deployment. The jump from delivery to deployment requires high confidence in your automated tests and observability.
Pipeline Stages
A production-grade CI/CD pipeline typically includes these stages:
1. Source Stage
Triggered by a code change — a pull request, a merge to main, or a tag. The pipeline fetches the source code and prepares the build environment.
# GitHub Actions trigger example
name: CI/CD Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
2. Build Stage
Compile the code, resolve dependencies, and produce build artifacts. For containerized applications, this means building a Docker image:
# Multi-stage Dockerfile for optimized builds
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --production=false
COPY . .
RUN npm run build
RUN npm prune --omit=dev   # drop dev dependencies so the runtime image stays small
FROM node:20-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
3. Test Stage
The test stage is the heart of CI. It runs multiple layers of tests:
# Parallel test execution in GitHub Actions
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:unit -- --coverage
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/
  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: testdb
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:integration
        env:
          DATABASE_URL: postgresql://postgres:test@localhost:5432/testdb
  e2e-tests:
    runs-on: ubuntu-latest
    needs: [unit-tests, integration-tests]
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx playwright install
      - run: npm run test:e2e
4. Security Scanning
Integrate security checks into your pipeline to catch vulnerabilities early:
- SAST (Static Application Security Testing): Analyze source code for vulnerabilities
- SCA (Software Composition Analysis): Scan dependencies for known CVEs
- Container scanning: Check Docker images for vulnerabilities
- Secret detection: Ensure no credentials are committed
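These checks can run as another parallel job in the pipeline. A sketch using GitHub Actions — `npm audit` is built into npm, while the Trivy and Gitleaks action names, versions, and inputs are assumptions to verify against their documentation:

```yaml
# Security scanning job sketch (action names/versions are assumptions; check their docs)
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0            # full history so secret scanning covers past commits
      - name: SCA - scan dependencies for known CVEs
        run: npm audit --audit-level=high
      - name: Container scan - check the built image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: my-app:${{ github.sha }}
          exit-code: '1'            # fail the pipeline on findings
          severity: CRITICAL,HIGH
      - name: Secret detection
        uses: gitleaks/gitleaks-action@v2
```

Running these alongside the test jobs keeps the pipeline fast; any single failure still blocks the merge.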
5. Artifact Management
Store versioned build artifacts for deployment. Tag images with the Git SHA for traceability:
# Tag and push Docker image
docker build -t my-app:${GIT_SHA} .
docker tag my-app:${GIT_SHA} registry.example.com/my-app:${GIT_SHA}
docker push registry.example.com/my-app:${GIT_SHA}
# Also tag as latest for convenience
docker tag my-app:${GIT_SHA} registry.example.com/my-app:latest
docker push registry.example.com/my-app:latest
6. Deploy Stage
Deploy the artifact to the target environment. Use the appropriate deployment strategy for your risk tolerance:
# Kubernetes deployment with kubectl
kubectl set image deployment/my-app my-app=registry.example.com/my-app:${GIT_SHA}
# Wait for rollout to complete
kubectl rollout status deployment/my-app --timeout=300s
# Verify deployment
kubectl get pods -l app=my-app
Testing Strategy: The Testing Pyramid
A balanced testing strategy follows the testing pyramid:
| Layer | Speed | Scope | Count | Runs When |
|---|---|---|---|---|
| Unit Tests | Milliseconds | Single function/class | Thousands | Every commit |
| Integration Tests | Seconds | Component interactions | Hundreds | Every commit |
| E2E Tests | Minutes | Full user workflows | Dozens | Before deploy |
| Performance Tests | Minutes-Hours | System under load | Handful | Before release |
Environment Promotion
Code should flow through environments in a consistent order. Each environment has a specific purpose:
- Development: Developer sandbox for local testing
- CI: Automated test environment (ephemeral)
- Staging: Production mirror for final validation
- Production: Live user-facing environment
The same artifact (Docker image, JAR, binary) must be promoted through all environments — never rebuild for each environment. Configuration differences should come from environment variables or config maps.
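One way to implement this on Kubernetes is a Kustomize overlay per environment: the base manifests are shared and identical everywhere, and each overlay pins the promoted image tag and injects environment-specific config. The paths and values below are illustrative:

```yaml
# overlays/production/kustomization.yaml (illustrative sketch)
resources:
  - ../../base                     # shared manifests, identical in every environment
images:
  - name: my-app
    newName: registry.example.com/my-app
    newTag: "3f9c2ab"              # hypothetical Git SHA - the same tag that passed staging
configMapGenerator:
  - name: my-app-config
    literals:
      - LOG_LEVEL=warn             # environment-specific values live only in the overlay
```

Promotion then means updating `newTag` in the next environment's overlay, never rebuilding the image.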
Infrastructure as Code
Your infrastructure should be defined in code, version-controlled, and deployed through the same pipeline as your application. Common IaC tools include:
- Terraform: Cloud-agnostic infrastructure provisioning
- Pulumi: Infrastructure as code using programming languages
- AWS CDK: AWS infrastructure using TypeScript, Python, etc.
- Helm: Kubernetes package management
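Because the pipeline can run any CLI, infrastructure changes can flow through the same stages as application code. A sketch of a Terraform job in GitHub Actions — the `hashicorp/setup-terraform` action is real, but the `infra/` directory and the omitted backend/workspace configuration are assumptions:

```yaml
# Infrastructure job sketch (assumes Terraform config lives in infra/)
  infrastructure:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infra
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -out=tfplan
      - run: terraform apply -auto-approve tfplan
        if: github.ref == 'refs/heads/main'   # plan on PRs, apply only on main
```

Gating `apply` on the main branch mirrors the code path: pull requests show a plan for review, merges make the change real.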
GitOps
GitOps takes infrastructure as code further by using Git as the single source of truth for both application and infrastructure state. Tools like ArgoCD and Flux watch a Git repository and automatically reconcile the cluster state to match what is declared in Git:
# ArgoCD Application manifest
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/my-app-manifests
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

CI/CD Tools Comparison
| Tool | Type | Config Format | Best For |
|---|---|---|---|
| GitHub Actions | Cloud | YAML | GitHub-hosted projects |
| GitLab CI | Cloud / Self-hosted | YAML | GitLab-hosted projects |
| Jenkins | Self-hosted | Groovy (Jenkinsfile) | Complex enterprise pipelines |
| ArgoCD | Self-hosted | YAML | Kubernetes GitOps |
| CircleCI | Cloud | YAML | Fast parallel builds |
| Azure Pipelines | Cloud | YAML | Azure ecosystem |
Pipeline Best Practices
- Keep pipelines fast: Target under 10 minutes for CI. Use caching, parallelism, and incremental builds
- Fail fast: Run cheap checks (linting, type checking) before expensive ones (e2e tests)
- Make pipelines reproducible: Pin dependency versions, use lock files, specify exact tool versions
- Cache aggressively: Cache dependencies, Docker layers, and test databases
- Use branch protection: Require passing CI before merging pull requests
- Monitor pipeline health: Track build times, failure rates, and flaky tests
- Treat pipeline code like application code: Review changes, test in a branch, document behavior
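As a concrete example of caching, the npm-based jobs shown earlier can cache dependencies through `actions/setup-node` (a real GitHub Actions feature; the Node version is an assumption matching the Dockerfile above):

```yaml
# npm dependency caching sketch via actions/setup-node
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm        # caches ~/.npm, keyed on package-lock.json
      - run: npm ci         # restores from cache when the lock file is unchanged
```

On a cache hit, `npm ci` skips the network download entirely, often cutting minutes from each job.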
A great CI/CD pipeline is invisible — developers push code and it flows to production without friction. But building that pipeline requires deliberate design, continuous improvement, and a commitment to automation. Start simple, add stages as your confidence grows, and invest in observability so you always know what your pipeline is doing. For the next step after your pipeline deploys, explore deployment strategies and Kubernetes architecture.