# Kubernetes: Deployment Strategies in Production

Kubernetes has become the de facto standard for container orchestration. This guide covers the most important deployment strategies for applications in production, along with the supporting configuration they rely on.
## Kubernetes Fundamentals

### Cluster Architecture

Within the cluster, environments are isolated into namespaces so that resources, policies, and monitoring can be scoped per environment:
```yaml
# namespace.yaml - Organization by environment
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: prod
    monitoring: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
    monitoring: enabled
```
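
A quick way to confirm the environment layout is to apply the manifest and filter by label. A minimal sketch, assuming the file is saved as `namespace.yaml`:

```bash
# Create (or update) both namespaces
kubectl apply -f namespace.yaml

# List only the namespaces that have monitoring enabled
kubectl get namespaces -l monitoring=enabled --show-labels
```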
### ConfigMaps and Secrets
```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  database.host: "db.production.svc.cluster.local"
  redis.host: "redis.production.svc.cluster.local"
  app.log.level: "info"
---
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: production
type: Opaque
data:
  database-password: <base64-encoded-password>
  api-key: <base64-encoded-api-key>
```
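
The `<base64-encoded-*>` placeholders must be filled in with base64 values. Rather than encoding by hand, one option is to let kubectl generate the manifest; the snippet below is a sketch with placeholder values, not real credentials:

```bash
# Encode a single value manually
echo -n 'my-db-password' | base64

# Or let kubectl build the whole Secret manifest (values here are placeholders)
kubectl create secret generic app-secrets \
  --namespace production \
  --from-literal=database-password='my-db-password' \
  --from-literal=api-key='my-api-key' \
  --dry-run=client -o yaml > secret.yaml
```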
## Deployment Strategies

### Rolling Update

The default Deployment strategy replaces pods gradually, keeping the application available throughout the rollout:
```yaml
# deployment-rolling.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-rolling
  namespace: production
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # at most 2 extra pods during the rollout
      maxUnavailable: 1  # at most 1 pod unavailable at a time
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v2.1.0
    spec:
      containers:
        - name: app
          image: myapp:v2.1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
```
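
During a rolling update, kubectl can follow the rollout and revert it if the new version misbehaves. Assuming the Deployment above, a typical sequence looks like this (the image tag is illustrative):

```bash
# Trigger the update by changing the image
kubectl -n production set image deployment/app-rolling app=myapp:v2.1.0

# Watch the rollout until all 6 replicas run the new version
kubectl -n production rollout status deployment/app-rolling

# Roll back to the previous ReplicaSet if something goes wrong
kubectl -n production rollout undo deployment/app-rolling
```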
### Blue-Green Deployment

Two identical environments (blue and green) run side by side; the Service selector decides which one receives production traffic:
```yaml
# blue-green-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: production
spec:
  selector:
    app: myapp
    version: blue  # switch to 'green' to cut traffic over
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
---
# deployment-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
        - name: app
          image: myapp:v2.0.0
          ports:
            - containerPort: 3000
---
# deployment-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
        - name: app
          image: myapp:v2.1.0
          ports:
            - containerPort: 3000
```
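
The actual blue-to-green switch is a single change to the Service selector. One way to do it atomically, assuming the resources above, is with `kubectl patch`; repointing the selector back to blue is the rollback:

```bash
# Send production traffic to the green deployment
kubectl -n production patch service app-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# Rollback: point the selector back at blue
kubectl -n production patch service app-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'
```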
### Canary Deployment with Istio

Istio's traffic management splits requests between a stable subset and a canary subset, either by request header or by weight:
```yaml
# canary-destinationrule.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-destination
  namespace: production
spec:
  host: app-service
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary
---
# canary-virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-canary
  namespace: production
spec:
  hosts:
    - app-service
  http:
    # Requests carrying the canary header always go to the canary subset
    - match:
        - headers:
            canary:
              exact: "true"
      route:
        - destination:
            host: app-service
            subset: canary
    # Everything else is split by weight
    - route:
        - destination:
            host: app-service
            subset: stable
          weight: 90
        - destination:
            host: app-service
            subset: canary
          weight: 10  # 10% of traffic goes to the canary
```
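
Promoting the canary means shifting the weights in the VirtualService. One way to do it is with a JSON patch; this is a sketch, and the array indices assume the exact rule order shown above:

```bash
# Move from 90/10 to 75/25 stable/canary
kubectl -n production patch virtualservice app-canary --type=json -p='[
  {"op": "replace", "path": "/spec/http/1/route/0/weight", "value": 75},
  {"op": "replace", "path": "/spec/http/1/route/1/weight", "value": 25}
]'
```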
## Resource Management and Scaling

### Horizontal Pod Autoscaler
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
```
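
The HPA only works if the cluster can serve resource metrics (typically via metrics-server). Watching the autoscaler is the easiest way to confirm it is reacting:

```bash
# Current metrics, targets, and replica count, refreshed as they change
kubectl -n production get hpa app-hpa --watch

# Scaling events and the reasons behind them
kubectl -n production describe hpa app-hpa
```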
### Vertical Pod Autoscaler
```yaml
# vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: app
        maxAllowed:
          cpu: 1000m
          memory: 2Gi
        minAllowed:
          cpu: 100m
          memory: 128Mi
```
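
VPA is not part of core Kubernetes; its CRDs and the recommender/updater components must be installed separately. Once they are, the recommendations can be inspected before trusting `updateMode: "Auto"`:

```bash
# Shows the target, lower bound, and upper bound recommended for each container
kubectl -n production describe vpa app-vpa
```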
## Networking and Service Mesh

### Ingress with NGINX
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/limit-rps: "100"  # requests per second per client IP
spec:
  ingressClassName: nginx  # class name used by a standard ingress-nginx installation
  tls:
    - hosts:
        - myapp.empresa.com
      secretName: app-tls
  rules:
    - host: myapp.empresa.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```
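
With cert-manager and the annotations above, the TLS certificate is requested automatically. A quick check that the Ingress has an address and that HTTPS answers, assuming cert-manager's CRDs are installed and using the example hostname from the manifest:

```bash
# Address, class, and hosts served by the Ingress
kubectl -n production get ingress app-ingress

# Certificate objects created by cert-manager in the namespace
kubectl -n production get certificate

# End-to-end check of the HTTPS endpoint
curl -I https://myapp.empresa.com
```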
### Network Policies
```yaml
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only accept traffic from the ingress controller namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: nginx-ingress
      ports:
        - protocol: TCP
          port: 3000
  egress:
    # Allow connections to the database namespace
    - to:
        - namespaceSelector:
            matchLabels:
              name: database
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS resolution to any destination
    - to: []
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
```
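
A policy like this is easy to validate with a throwaway pod: traffic from a namespace that is not in the allow-list (staging here, as an example) should time out.

```bash
# Expected to fail: staging is not allowed by the ingress rule
kubectl -n staging run netpol-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- -T 3 http://app-service.production.svc.cluster.local/health
```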
## Monitoring and Observability

### Prometheus ServiceMonitor
```yaml
# servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-metrics
  namespace: production
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
      honorLabels: true
```
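
The `port: metrics` endpoint refers to a *named* port, so the Service backing the app must expose a port literally called `metrics`. Assuming it does, the scrape target can be checked by hand:

```bash
# Forward the named metrics port locally (assumes app-service defines a port named "metrics")
kubectl -n production port-forward svc/app-service 9100:metrics &

# The same endpoint Prometheus will scrape
curl -s http://localhost:9100/metrics | head
```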
### Grafana Dashboard ConfigMap
```yaml
# dashboard-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  dashboard.json: |
    {
      "dashboard": {
        "id": null,
        "title": "App Metrics",
        "tags": ["kubernetes", "app"],
        "panels": [
          {
            "title": "Request Rate",
            "type": "graph",
            "targets": [
              {
                "expr": "rate(http_requests_total[5m])",
                "legendFormat": "{{method}} {{status}}"
              }
            ]
          }
        ]
      }
    }
```
## Security and RBAC

### Service Account and RBAC
```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-serviceaccount
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-serviceaccount
    namespace: production
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```
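
`kubectl auth can-i` makes it easy to verify the Role does exactly what is intended, both for the permissions it grants and for those it should not:

```bash
# Should answer "yes": the Role allows reading Secrets in production
kubectl auth can-i get secrets -n production \
  --as=system:serviceaccount:production:app-serviceaccount

# Should answer "no": the Role never grants delete on Deployments
kubectl auth can-i delete deployments -n production \
  --as=system:serviceaccount:production:app-serviceaccount
```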
### Pod Security Standards
```yaml
# pod-security.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
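
Before enforcing the `restricted` profile on a namespace that already runs workloads, a server-side dry run shows which pods would violate it without changing anything:

```bash
# Warnings list every pod that would not be admitted under "restricted"
kubectl label --dry-run=server --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```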
## CI/CD Pipeline with GitOps

### ArgoCD Application
```yaml
# argocd-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/empresa/k8s-manifests
    targetRevision: HEAD
    path: production/myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
```
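
With automated sync enabled, ArgoCD applies whatever lands in the Git repository. The `argocd` CLI is still useful for checking status and forcing a sync during troubleshooting:

```bash
# Health, sync status, and last applied revision
argocd app get myapp-production

# Trigger a sync manually (normally the automation handles this)
argocd app sync myapp-production

# Deployment history, useful when a rollback through Git is needed
argocd app history myapp-production
```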
## Backup and Disaster Recovery

### Velero Backup
```yaml
# backup-schedule.yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"  # daily at 2:00 AM
  template:
    includedNamespaces:
      - production
      - staging
    storageLocation: default
    volumeSnapshotLocations:
      - default
    ttl: 720h  # 30 days
```
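
Backups are only useful if restores are rehearsed. Assuming the schedule above, an ad hoc backup and a namespace-scoped restore look roughly like this (the backup names are illustrative):

```bash
# One-off backup using the same settings as the schedule
velero backup create pre-upgrade --from-schedule daily-backup

# List backups and their status
velero backup get

# Restore only the production namespace from a given backup
velero restore create --from-backup daily-backup-20240101020000 \
  --include-namespaces production
```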
## Conclusion

Running Kubernetes in production requires:
- Deployment strategies suited to each use case
- Efficient horizontal and vertical autoscaling
- Secure networking, with a service mesh where appropriate
- Comprehensive monitoring and observability
- Strict RBAC and security policies
- Automated GitOps pipelines
- Solid backup strategies

Adopting these practices lays the groundwork for resilient, scalable applications on Kubernetes.