Policies Guide
Lynq provides fine-grained control over resource lifecycle through four policy types. This guide explains each policy and when to use them.
Policy Types Overview
| Policy | Controls | Default | Options |
|---|---|---|---|
| CreationPolicy | When resources are created | WhenNeeded | Once, WhenNeeded |
| DeletionPolicy | What happens on delete | Delete | Delete, Retain |
| ConflictPolicy | Ownership conflict handling | Stuck | Stuck, Force |
| PatchStrategy | How resources are updated | apply | apply, merge, replace |
Practical Examples
See Policy Combinations Examples for detailed real-world scenarios with diagrams and step-by-step explanations.
Field-Level Control (v1.1.4+)
For fine-grained control over specific fields while using WhenNeeded, see Field-Level Ignore Control. This allows you to selectively ignore certain fields during reconciliation (e.g., HPA-managed replicas).
CreationPolicy
Controls when a resource is created or re-applied.
WhenNeeded (Default)
Resource is created and updated whenever the spec changes.
deployments:
- id: app
creationPolicy: WhenNeeded # Default
nameTemplate: "{{ .uid }}-app"
spec:
# ... deployment spec
Behavior:
- ✅ Creates resource if it doesn't exist
- ✅ Updates resource when spec changes
- ✅ Re-applies if manually deleted
- ✅ Maintains desired state continuously
Use when:
- Resources should stay synchronized with templates
- You want drift correction
- Resource state should match database
Example: Application deployments, services, configmaps
Alternative: Use ignoreFields
If you need to update most fields but ignore specific ones (e.g., replicas controlled by HPA), consider using creationPolicy: WhenNeeded with ignoreFields instead of using Once. This provides more flexibility while still allowing selective field updates. See Field-Level Ignore Control for details.
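A minimal sketch of this pattern, assuming ignoreFields accepts a list of field paths (the exact syntax is documented in Field-Level Ignore Control):
deployments:
- id: app
  creationPolicy: WhenNeeded
  ignoreFields:            # assumed syntax: field paths skipped during reconciliation
  - spec.replicas          # e.g., leave the replica count to the HPA
  nameTemplate: "{{ .uid }}-app"
  spec:
    # ... deployment spec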
Once
Resource is created only once and never updated, even if spec changes.
jobs:
- id: init-job
creationPolicy: Once
nameTemplate: "{{ .uid }}-init"
spec:
apiVersion: batch/v1
kind: Job
spec:
template:
spec:
containers:
- name: init
image: busybox
command: ["sh", "-c", "echo Initializing node {{ .uid }}"]
restartPolicy: Never
Behavior:
- ✅ Creates resource on first reconciliation
- ❌ Never updates resource, even if template changes
- ✅ Skips if the resource already exists with the lynq.sh/created-once annotation
- ✅ Re-creates if manually deleted
Use when:
- One-time initialization tasks
- Security resources that shouldn't change
- Database migrations
- Initial setup jobs
Example: Init Jobs, security configurations, bootstrap scripts
Annotation Added:
metadata:
annotations:
lynq.sh/created-once: "true"
DeletionPolicy
Controls what happens to resources when a LynqNode CR is deleted.
Delete (Default)
Resources are deleted, via ownerReference garbage collection, when the LynqNode is deleted.
deployments:
- id: app
deletionPolicy: Delete # Default
nameTemplate: "{{ .uid }}-app"
spec:
# ... deployment spec
Behavior:
- ✅ Removes resource from cluster automatically
- ✅ Uses ownerReference for garbage collection
- ✅ No orphaned resources
Use when:
- Resources are node-specific and should be removed
- You want complete cleanup
- Resources have no value after node deletion
Example: Deployments, Services, ConfigMaps, Secrets
Retain
Resources are kept in the cluster and never get an ownerReference; the operator tracks them with labels instead.
persistentVolumeClaims:
- id: data-pvc
deletionPolicy: Retain
nameTemplate: "{{ .uid }}-data"
spec:
apiVersion: v1
kind: PersistentVolumeClaim
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
Behavior:
- ✅ No ownerReference (label-based tracking only)
- ✅ Resource stays in cluster even when Node is deleted
- ✅ Orphan labels added on deletion for identification
- ❌ No automatic cleanup by Kubernetes garbage collector
- ⚠️ Manual deletion required
Why no ownerReference?
Setting ownerReference would cause Kubernetes garbage collector to automatically delete the resource when the LynqNode CR is deleted, regardless of DeletionPolicy. The operator evaluates DeletionPolicy at resource creation time and uses label-based tracking (lynq.sh/node, lynq.sh/node-namespace) instead of ownerReference for Retain resources.
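For illustration, a retained resource's metadata looks roughly like this (values are placeholders):
metadata:
  labels:
    lynq.sh/node: acme-prod-template      # LynqNode that manages this resource
    lynq.sh/node-namespace: default       # namespace of that LynqNode
  # note: no ownerReferences entry, so the garbage collector never cascades deletion here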
Use when:
- Data must survive node deletion
- Resources are expensive to recreate
- Regulatory/compliance requirements
- Debugging or forensics needed
Example: PersistentVolumeClaims, backup resources, audit logs
Orphan Markers:
When resources are retained (DeletionPolicy=Retain), they are automatically marked for easy identification:
metadata:
labels:
lynq.sh/orphaned: "true" # Label for selector queries
annotations:
lynq.sh/orphaned-at: "2025-01-15T10:30:00Z" # RFC3339 timestamp
lynq.sh/orphaned-reason: "RemovedFromTemplate"  # or "LynqNodeDeleted"
Finding orphaned resources:
# Find all orphaned resources
kubectl get all -A -l lynq.sh/orphaned=true
# Find resources orphaned due to template changes
kubectl get all -A -l lynq.sh/orphaned=true \
-o jsonpath='{range .items[?(@.metadata.annotations.lynq\.sh/orphaned-reason=="RemovedFromTemplate")]}{.kind}/{.metadata.name}{"\n"}{end}'
# Find resources orphaned due to node deletion
kubectl get all -A -l lynq.sh/orphaned=true \
-o jsonpath='{range .items[?(@.metadata.annotations.lynq\.sh/orphaned-reason=="LynqNodeDeleted")]}{.kind}/{.metadata.name}{"\n"}{end}'
Orphan Resource Cleanup
Dynamic Template Evolution
DeletionPolicy applies not only when a LynqNode CR is deleted, but also when resources are removed from the LynqForm.
How it works:
The operator tracks all applied resources in status.appliedResources with keys in the format kind/namespace/name@id (see the sketch after this list). During each reconciliation it:
- Detects orphans: compares the current template's resources with previously applied resources
- Respects policy: applies each resource's deletionPolicy setting
- Updates status: records the new set of applied resources
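A rough sketch of that tracking, assuming appliedResources is rendered as a list of kind/namespace/name@id keys (the exact status layout may differ):
status:
  appliedResources:
  - Deployment/default/acme-app@app
  - Service/default/acme-svc@app-svc
  - PersistentVolumeClaim/default/acme-data@data-pvc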
Orphan Lifecycle - Re-adoption:
If you re-add a previously removed resource to the template, the operator automatically:
- Removes all orphan markers (label + annotations)
- Re-applies tracking labels or ownerReferences based on current DeletionPolicy
- Resumes full management of the resource
This means you can safely experiment with template changes:
- Remove a resource → It becomes orphaned (if Retain policy)
- Re-add the same resource → It's cleanly re-adopted into management
- No manual cleanup or label management needed!
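To spot-check re-adoption, watch the orphan marker disappear after you re-add the resource (the resource name is a placeholder):
# While removed from the template (Retain policy): the orphan label is present
kubectl get deployment acme-app --show-labels | grep lynq.sh/orphaned

# After re-adding it to the LynqForm and letting reconciliation run, the same
# command returns nothing because the orphan markers were removed
kubectl get deployment acme-app --show-labels | grep lynq.sh/orphaned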
Protecting LynqNodes from Cascade Deletion
The Problem
Cascading deletions are immediate: deleting a LynqHub or LynqForm cascades to all LynqNode CRs, which in turn deletes their managed resources unless retention policies are set.
Recommended Solution: Use Retain DeletionPolicy
Before deleting LynqHub or LynqForm, ensure all resources in your templates use deletionPolicy: Retain:
apiVersion: operator.lynq.sh/v1
kind: LynqForm
metadata:
name: my-template
spec:
hubId: my-hub
# Set Retain for ALL resources
deployments:
- id: app
deletionPolicy: Retain # ✅ Keeps deployment
nameTemplate: "{{ .uid }}-app"
spec:
# ... deployment spec
services:
- id: svc
deletionPolicy: Retain # ✅ Keeps service
nameTemplate: "{{ .uid }}-svc"
spec:
# ... service spec
persistentVolumeClaims:
- id: data
deletionPolicy: Retain # ✅ Keeps PVC and data
nameTemplate: "{{ .uid }}-data"
spec:
# ... PVC spec
Why This Works
With deletionPolicy: Retain:
- At creation time: Resources are created with label-based tracking only (NO ownerReference)
- If the LynqHub/LynqForm is deleted → its LynqNode CRs are deleted
- When LynqNode CRs are deleted → Resources stay in cluster (no ownerReference = no automatic deletion)
- Finalizer adds orphan labels for easy identification
- Resources stay in the cluster because Kubernetes garbage collector never marks them for deletion
Key insight: DeletionPolicy is evaluated when creating resources, not when deleting them. This prevents the Kubernetes garbage collector from auto-deleting Retain resources.
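To verify that a retained resource really has no ownerReference and still carries its tracking labels, you can inspect it directly (names are placeholders):
# Empty output = no ownerReference, so cascade deletion cannot reach this PVC
kubectl get pvc acme-data -o jsonpath='{.metadata.ownerReferences}'

# The label-based tracking should still be in place
kubectl get pvc acme-data -o jsonpath='{.metadata.labels}'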
When to Use This Strategy
✅ Use Retain when:
- You need to delete/recreate LynqHub for migration
- You're updating LynqForm with breaking changes
- You're testing hub configuration changes
- You have production LynqNodes that must not be interrupted
- You're performing maintenance on operator components
❌ Don't use Retain when:
- You actually want to clean up all node resources
- Testing in development environments
- You have backup/restore procedures in place
Alternative: Update Instead of Delete
Instead of deleting and recreating, consider:
# ❌ DON'T: Delete and recreate (causes cascade deletion)
kubectl delete lynqhub my-hub
kubectl apply -f updated-hub.yaml
# ✅ DO: Update in place
kubectl apply -f updated-hub.yaml
ConflictPolicy
Controls what happens when a resource already exists with a different owner or field manager.
Stuck (Default)
Reconciliation stops if an ownership conflict is detected.
services:
- id: app-svc
conflictPolicy: Stuck # Default
nameTemplate: "{{ .uid }}-app"
spec:
# ... service spec
Behavior:
- ✅ Fails safe - doesn't overwrite existing resources
- ❌ Stops reconciliation on conflict
- 📢 Emits a ResourceConflict event
- ⚠️ Marks the Node as Degraded
Use when:
- Safety is paramount
- You want to investigate conflicts manually
- Resources might be managed by other controllers
- Default case (most conservative)
Example: Any resource where safety > availability
Force
Attempts to take ownership using Server-Side Apply with force=true.
deployments:
- id: app
conflictPolicy: Force
nameTemplate: "{{ .uid }}-app"
spec:
# ... deployment spec
Behavior:
- ✅ Takes ownership forcefully
- ⚠️ May overwrite other controllers' changes
- ✅ Reconciliation continues
- 📢 Emits events on success/failure
Use when:
- Lynq should be the source of truth
- Conflicts are expected and acceptable
- You're migrating from another management system
- Availability > safety
Example: Resources exclusively managed by Lynq
Warning: This can override changes from other controllers or users!
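As a rough analogy (not the operator's literal command), Force behaves like forcing conflicts in a hand-run server-side apply:
# Take field ownership away from whichever manager currently owns the conflicting fields
kubectl apply --server-side --force-conflicts -f deployment.yaml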
PatchStrategy
Controls how resources are updated.
apply (Default - Server-Side Apply)
Uses Kubernetes Server-Side Apply for declarative updates.
deployments:
- id: app
patchStrategy: apply # Default
nameTemplate: "{{ .uid }}-app"
spec:
# ... deployment spec
Behavior:
- ✅ Declarative updates
- ✅ Conflict detection
- ✅ Preserves fields managed by other controllers
- ✅ Field-level ownership tracking
- ✅ Most efficient
Use when:
- Multiple controllers manage the same resource
- You want Kubernetes-native updates
- Default case (best practice)
Field Manager: lynq
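To see which fields the lynq field manager currently owns on a resource, inspect its managedFields (the resource name is a placeholder):
# Look for an entry with manager: lynq and the fields it owns under fieldsV1
kubectl get deployment acme-app -o yaml --show-managed-fields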
merge (Strategic Merge Patch)
Uses strategic merge patch for updates.
services:
- id: app-svc
patchStrategy: merge
nameTemplate: "{{ .uid }}-app"
spec:
# ... service spec
Behavior:
- ✅ Merges changes with existing resource
- ✅ Preserves unspecified fields
- ⚠️ Less precise conflict detection
- ✅ Works with older Kubernetes versions
Use when:
- Partial updates needed
- Compatibility with legacy systems
- Strategic merge semantics preferred
replace (Full Replacement)
Completely replaces the resource.
configMaps:
- id: config
patchStrategy: replace
nameTemplate: "{{ .uid }}-config"
spec:
# ... configmap spec
Behavior:
- ⚠️ Replaces entire resource
- ❌ Loses fields not in template
- ✅ Guarantees exact match
- ✅ Handles resourceVersion conflicts
Use when:
- Exact resource state required
- No other controllers manage the resource
- Complete replacement is intentional
Warning: This removes any fields not in your template!
Default Values
If policies are not specified, these defaults apply:
resources:
- id: example
creationPolicy: WhenNeeded # ✅ Default
deletionPolicy: Delete # ✅ Default
conflictPolicy: Stuck # ✅ Default
patchStrategy: apply # ✅ Default
Policy Decision Matrix
Recommended policy combinations by resource type:
| Resource Type | CreationPolicy | DeletionPolicy | ConflictPolicy | PatchStrategy |
|---|---|---|---|---|
| Deployment | WhenNeeded | Delete | Stuck | apply |
| Service | WhenNeeded | Delete | Stuck | apply |
| ConfigMap | WhenNeeded | Delete | Stuck | apply |
| Secret | WhenNeeded | Delete | Force | apply |
| PVC | Once | Retain | Stuck | apply |
| Init Job | Once | Delete | Force | replace |
| Namespace | WhenNeeded | Retain | Force | apply |
| Ingress | WhenNeeded | Delete | Stuck | apply |
See Detailed Examples
For in-depth explanations with diagrams and scenarios, see Policy Combinations Examples.
Observability
Events
Policies trigger various events:
# View Node events
kubectl describe lynqnode <lynqnode-name>
Conflict Events:
ResourceConflict: Resource conflict detected for default/acme-app (Kind: Deployment, Policy: Stuck)
Deletion Events:
LynqNodeDeleting: Deleting LynqNode 'acme-prod-template' (template: prod-template, uid: acme)
LynqNodeDeleted: Successfully deleted LynqNode 'acme-prod-template'
Metrics
# Count apply attempts by policy
apply_attempts_total{kind="Deployment",result="success",conflict_policy="Stuck"}
# Track conflicts
lynqnode_conflicts_total{lynqnode="acme-web",conflict_policy="Stuck"}
# Failed reconciliations
lynqnode_reconcile_duration_seconds{result="error"}
See Monitoring Guide for complete metrics reference.
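If these metrics are scraped by Prometheus, queries along these lines can highlight policy-related problems (metric names are taken from above; the exact label sets may differ):
# Conflict rate per conflict policy over the last 5 minutes
sum by (conflict_policy) (rate(lynqnode_conflicts_total[5m]))

# Apply attempts that did not succeed, per resource kind
sum by (kind) (rate(apply_attempts_total{result!="success"}[5m]))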
Troubleshooting
Conflict Stuck
Symptom: LynqNode shows Degraded condition
Cause: Resource exists with different owner
Solution:
Check who owns the resource:
kubectl get <resource-type> <resource-name> -o yaml | grep -A5 managedFields
Either:
- Delete the conflicting resource
- Change to conflictPolicy: Force
- Use a unique nameTemplate
Resource Not Updating
Symptom: Changes to template don't apply
Cause: creationPolicy: Once is set
Solution:
- Change to creationPolicy: WhenNeeded, or
- Delete the resource to force recreation, or
- This is expected behavior for the Once policy
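To confirm the resource was created under the Once policy, check for the documented annotation (names are placeholders):
# A resource created with creationPolicy: Once carries the lynq.sh/created-once annotation
kubectl get job acme-init -o yaml | grep lynq.sh/created-once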
Resource Not Deleted
Symptom: Resource remains after LynqNode deletion
Cause: deletionPolicy: Retain is set
Solution:
- Manually delete: kubectl delete <resource-type> <resource-name>
- This is expected behavior for the Retain policy
See Also
- Policy Combinations Examples - Detailed real-world scenarios with diagrams
- Field-Level Ignore Control - Fine-grained field management
- Template Guide - Template syntax and functions
- Dependencies Guide - Resource ordering
- Troubleshooting - Common issues
