
Policies Guide ​

Lynq provides fine-grained control over resource lifecycle through four policy types. This guide explains each policy and when to use them.

Policy Types Overview ​

| Policy | Controls | Default | Options |
|---|---|---|---|
| CreationPolicy | When resources are created | WhenNeeded | Once, WhenNeeded |
| DeletionPolicy | What happens on delete | Delete | Delete, Retain |
| ConflictPolicy | Ownership conflict handling | Stuck | Stuck, Force |
| PatchStrategy | How resources are updated | apply | apply, merge, replace |

Practical Examples

See Policy Combinations Examples for detailed real-world scenarios with diagrams and step-by-step explanations.

Field-Level Control (v1.1.4+)

For fine-grained control over specific fields while using WhenNeeded, see Field-Level Ignore Control. This allows you to selectively ignore certain fields during reconciliation (e.g., HPA-managed replicas).

CreationPolicy ​

Controls when a resource is created or re-applied.

CreationPolicy Flow Visualizer

Interactive visualizer: delete or recreate resources to see how changes cascade through the system and how finalizers ensure proper cleanup order; "Make Drift" shows how the two policies diverge. πŸ‘οΈ Watch resources (CreationPolicy=WhenNeeded) continuously sync with LynqNode changes; πŸ”’ Static resources (CreationPolicy=Once) never sync after creation.

WhenNeeded (Default) ​

The resource is created if it doesn't exist and updated whenever the spec changes.

yaml
deployments:
  - id: app
    creationPolicy: WhenNeeded  # Default
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... deployment spec

Behavior:

  • βœ… Creates resource if it doesn't exist
  • βœ… Updates resource when spec changes
  • βœ… Re-applies if manually deleted
  • βœ… Maintains desired state continuously

Use when:

  • Resources should stay synchronized with templates
  • You want drift correction
  • Resource state should match database

Example: Application deployments, services, configmaps

Alternative: Use ignoreFields

If you need to update most fields but ignore specific ones (e.g., replicas controlled by HPA), consider using creationPolicy: WhenNeeded with ignoreFields instead of using Once. This provides more flexibility while still allowing selective field updates. See Field-Level Ignore Control for details.
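
A sketch of what this can look like (the ignoreFields path syntax here is illustrative; see Field-Level Ignore Control for the exact form):

yaml
deployments:
  - id: app
    creationPolicy: WhenNeeded
    ignoreFields:
      - spec.replicas   # illustrative path: leave the replica count to the HPA
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... deployment spec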

Once ​

The resource is created only once and never updated, even if the spec changes.

yaml
jobs:
  - id: init-job
    creationPolicy: Once
    nameTemplate: "{{ .uid }}-init"
    spec:
      apiVersion: batch/v1
      kind: Job
      spec:
        template:
          spec:
            containers:
            - name: init
              image: busybox
              command: ["sh", "-c", "echo Initializing node {{ .uid }}"]
            restartPolicy: Never

Behavior:

  • βœ… Creates resource on first reconciliation
  • ❌ Never updates resource, even if template changes
  • βœ… Skips if resource already exists with lynq.sh/created-once annotation
  • βœ… Re-creates if manually deleted

Use when:

  • One-time initialization tasks
  • Security resources that shouldn't change
  • Database migrations
  • Initial setup jobs

Example: Init Jobs, security configurations, bootstrap scripts

Annotation Added:

yaml
metadata:
  annotations:
    lynq.sh/created-once: "true"

DeletionPolicy ​

Controls what happens to resources when a LynqNode CR is deleted.

DeletionPolicy Flow Visualizer

Interactive visualizer: remove resources from the LynqForm template or delete the LynqNode to see how DeletionPolicy controls cleanup. πŸ—‘οΈ Delete resources are removed automatically via ownerReference (Kubernetes garbage collector); πŸ”’ Retain resources stay in the cluster with orphan markers (label-based tracking, no ownerReference).

Delete (Default) ​

Resources are deleted automatically via ownerReference when the LynqNode is deleted.

yaml
deployments:
  - id: app
    deletionPolicy: Delete  # Default
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... deployment spec

Behavior:

  • βœ… Removes resource from cluster automatically
  • βœ… Uses ownerReference for garbage collection
  • βœ… No orphaned resources

Use when:

  • Resources are node-specific and should be removed
  • You want complete cleanup
  • Resources have no value after node deletion

Example: Deployments, Services, ConfigMaps, Secrets

Retain ​

Resources are kept in the cluster and never have an ownerReference set (label-based tracking is used instead).

yaml
persistentVolumeClaims:
  - id: data-pvc
    deletionPolicy: Retain
    nameTemplate: "{{ .uid }}-data"
    spec:
      apiVersion: v1
      kind: PersistentVolumeClaim
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Behavior:

  • βœ… No ownerReference (label-based tracking only)
  • βœ… Resource stays in cluster even when Node is deleted
  • βœ… Orphan labels added on deletion for identification
  • ❌ No automatic cleanup by Kubernetes garbage collector
  • ⚠️ Manual deletion required

Why no ownerReference?

Setting ownerReference would cause Kubernetes garbage collector to automatically delete the resource when the LynqNode CR is deleted, regardless of DeletionPolicy. The operator evaluates DeletionPolicy at resource creation time and uses label-based tracking (lynq.sh/node, lynq.sh/node-namespace) instead of ownerReference for Retain resources.
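
For illustration, a Retain resource carries tracking labels like these instead of an ownerReference (values are examples):

yaml
metadata:
  labels:
    lynq.sh/node: acme-corp              # owning LynqNode
    lynq.sh/node-namespace: lynq-system  # namespace of the LynqNode
  # no ownerReferences entry, so the garbage collector never cascades deletion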

Use when:

  • Data must survive node deletion
  • Resources are expensive to recreate
  • Regulatory/compliance requirements
  • Debugging or forensics needed

Example: PersistentVolumeClaims, backup resources, audit logs

Orphan Markers:

When resources are retained (DeletionPolicy=Retain), they are automatically marked for easy identification:

yaml
metadata:
  labels:
    lynq.sh/orphaned: "true"  # Label for selector queries
  annotations:
    lynq.sh/orphaned-at: "2025-01-15T10:30:00Z"  # RFC3339 timestamp
    lynq.sh/orphaned-reason: "RemovedFromTemplate"  # or "LynqNodeDeleted"

Finding orphaned resources:

bash
# Find all orphaned resources
kubectl get all -A -l lynq.sh/orphaned=true

# Find resources orphaned due to template changes
kubectl get all -A -l lynq.sh/orphaned=true \
  -o jsonpath='{range .items[?(@.metadata.annotations.lynq\.sh/orphaned-reason=="RemovedFromTemplate")]}{.kind}/{.metadata.name}{"\n"}{end}'

# Find resources orphaned due to node deletion
kubectl get all -A -l lynq.sh/orphaned=true \
  -o jsonpath='{range .items[?(@.metadata.annotations.lynq\.sh/orphaned-reason=="LynqNodeDeleted")]}{.kind}/{.metadata.name}{"\n"}{end}'

Orphan Resource Cleanup ​

Dynamic Template Evolution

DeletionPolicy applies not only when a LynqNode CR is deleted, but also when resources are removed from the LynqForm.

How it works:

The operator tracks all applied resources in status.appliedResources with keys in the format kind/namespace/name@id. During each reconciliation:

  1. Detect Orphans: Compares current template resources with previously applied resources
  2. Respect Policy: Applies the resource's deletionPolicy setting
  3. Update Status: Tracks the new set of applied resources
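
For example, a LynqNode status might record its applied set like this (the exact status layout is an illustrative sketch; only the key format kind/namespace/name@id is documented):

yaml
status:
  appliedResources:
    - Deployment/default/acme-app@app      # kind/namespace/name@id
    - Service/default/acme-svc@svc
    - Job/default/acme-init-job@init-job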

Orphan Lifecycle - Re-adoption:

If you re-add a previously removed resource to the template, the operator automatically:

  1. Removes all orphan markers (label + annotations)
  2. Re-applies tracking labels or ownerReferences based on current DeletionPolicy
  3. Resumes full management of the resource

This means you can safely experiment with template changes:

  • Remove a resource β†’ It becomes orphaned (if Retain policy)
  • Re-add the same resource β†’ It's cleanly re-adopted into management
  • No manual cleanup or label management needed!

Protecting LynqNodes from Cascade Deletion ​

Cascading deletions are immediate

Deleting a LynqHub or LynqForm cascades to all LynqNode CRs, which in turn delete their managed resources unless retention policies are set.

The Problem ​

Before deleting LynqHub or LynqForm, ensure all resources in your templates use deletionPolicy: Retain:

yaml
apiVersion: operator.lynq.sh/v1
kind: LynqForm
metadata:
  name: my-template
spec:
  hubId: my-hub

  # Set Retain for ALL resources
  deployments:
    - id: app
      deletionPolicy: Retain  # βœ… Keeps deployment
      nameTemplate: "{{ .uid }}-app"
      spec:
        # ... deployment spec

  services:
    - id: svc
      deletionPolicy: Retain  # βœ… Keeps service
      nameTemplate: "{{ .uid }}-svc"
      spec:
        # ... service spec

  persistentVolumeClaims:
    - id: data
      deletionPolicy: Retain  # βœ… Keeps PVC and data
      nameTemplate: "{{ .uid }}-data"
      spec:
        # ... PVC spec

Why This Works ​

With deletionPolicy: Retain:

  1. At creation time: Resources are created with label-based tracking only (NO ownerReference)
  2. When the LynqHub/LynqForm is deleted β†’ LynqNode CRs are deleted
  3. When LynqNode CRs are deleted β†’ Resources stay in the cluster (no ownerReference = no automatic deletion)
  4. Finalizer adds orphan labels for easy identification
  5. Resources stay in the cluster because Kubernetes garbage collector never marks them for deletion

Key insight: DeletionPolicy is evaluated when creating resources, not when deleting them. This prevents the Kubernetes garbage collector from auto-deleting Retain resources.

When to Use This Strategy ​

βœ… Use Retain when:

  • You need to delete/recreate LynqHub for migration
  • You're updating LynqForm with breaking changes
  • You're testing hub configuration changes
  • You have production LynqNodes that must not be interrupted
  • You're performing maintenance on operator components

❌ Don't use Retain when:

  • You actually want to clean up all node resources
  • Testing in development environments
  • You have backup/restore procedures in place

Alternative: Update Instead of Delete ​

Instead of deleting and recreating, consider:

bash
# ❌ DON'T: Delete and recreate (causes cascade deletion)
kubectl delete lynqhub my-hub
kubectl apply -f updated-hub.yaml

# βœ… DO: Update in place
kubectl apply -f updated-hub.yaml

ConflictPolicy ​

Controls what happens when a resource already exists with a different owner or field manager.

ConflictPolicy Flow Visualizer

Interactive visualizer: trigger ownership conflicts to compare the two policies. ⚠️ Stuck halts reconciliation on conflict and marks the LynqNode as Degraded (safe, but requires a manual fix); ⚑ Force uses SSA force=true to take ownership (aggressive, but keeps reconciliation healthy).

Stuck (Default) ​

Reconciliation stops if an ownership conflict is detected.

yaml
services:
  - id: app-svc
    conflictPolicy: Stuck  # Default
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... service spec

Behavior:

  • βœ… Fails safe - doesn't overwrite existing resources
  • ❌ Stops reconciliation on conflict
  • πŸ“’ Emits ResourceConflict event
  • ⚠️ Marks Node as Degraded

Use when:

  • Safety is paramount
  • You want to investigate conflicts manually
  • Resources might be managed by other controllers
  • Default case (most conservative)

Example: Any resource where safety > availability

Force ​

Attempts to take ownership using Server-Side Apply with force=true.

yaml
deployments:
  - id: app
    conflictPolicy: Force
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... deployment spec

Behavior:

  • βœ… Takes ownership forcefully
  • ⚠️ May overwrite other controllers' changes
  • βœ… Reconciliation continues
  • πŸ“’ Emits events on success/failure

Use when:

  • Lynq should be the source of truth
  • Conflicts are expected and acceptable
  • You're migrating from another management system
  • Availability > safety

Example: Resources exclusively managed by Lynq

Warning: This can override changes from other controllers or users!
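
For intuition, Force behaves much like a forced server-side apply in kubectl, which transfers ownership of conflicting fields to the new field manager (file name is illustrative):

bash
# kubectl analogy for ConflictPolicy=Force
kubectl apply --server-side --force-conflicts \
  --field-manager=lynq -f deployment.yaml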

PatchStrategy ​

Controls how resources are updated.

apply (Default - Server-Side Apply) ​

Uses Kubernetes Server-Side Apply for declarative updates.

yaml
deployments:
  - id: app
    patchStrategy: apply  # Default
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... deployment spec

Behavior:

  • βœ… Declarative updates
  • βœ… Conflict detection
  • βœ… Preserves fields managed by other controllers
  • βœ… Field-level ownership tracking
  • βœ… Most efficient

Use when:

  • Multiple controllers manage the same resource
  • You want Kubernetes-native updates
  • Default case (best practice)

Field Manager: lynq
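
To verify which fields the operator owns, inspect managedFields (resource name is an example):

bash
# Entries with "manager: lynq" were applied by the operator
kubectl get deployment acme-app --show-managed-fields -o yaml | grep -B2 'manager: lynq'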

merge (Strategic Merge Patch) ​

Uses strategic merge patch for updates.

yaml
services:
  - id: app-svc
    patchStrategy: merge
    nameTemplate: "{{ .uid }}-app"
    spec:
      # ... service spec

Behavior:

  • βœ… Merges changes with existing resource
  • βœ… Preserves unspecified fields
  • ⚠️ Less precise conflict detection
  • βœ… Works with older Kubernetes versions

Use when:

  • Partial updates needed
  • Compatibility with legacy systems
  • Strategic merge semantics preferred

replace (Full Replacement) ​

Completely replaces the resource.

yaml
configMaps:
  - id: config
    patchStrategy: replace
    nameTemplate: "{{ .uid }}-config"
    spec:
      # ... configmap spec

Behavior:

  • ⚠️ Replaces entire resource
  • ❌ Loses fields not in template
  • βœ… Guarantees exact match
  • βœ… Handles resourceVersion conflicts

Use when:

  • Exact resource state required
  • No other controllers manage the resource
  • Complete replacement is intentional

Warning: This removes any fields not in your template!

Default Values ​

If policies are not specified, these defaults apply:

yaml
resources:
  - id: example
    creationPolicy: WhenNeeded  # βœ… Default
    deletionPolicy: Delete      # βœ… Default
    conflictPolicy: Stuck       # βœ… Default
    patchStrategy: apply        # βœ… Default

Policy Decision Matrix ​

Recommended policy combinations by resource type:

| Resource Type | CreationPolicy | DeletionPolicy | ConflictPolicy | PatchStrategy |
|---|---|---|---|---|
| Deployment | WhenNeeded | Delete | Stuck | apply |
| Service | WhenNeeded | Delete | Stuck | apply |
| ConfigMap | WhenNeeded | Delete | Stuck | apply |
| Secret | WhenNeeded | Delete | Force | apply |
| PVC | Once | Retain | Stuck | apply |
| Init Job | Once | Delete | Force | replace |
| Namespace | WhenNeeded | Retain | Force | apply |
| Ingress | WhenNeeded | Delete | Stuck | apply |

Why These Combinations? ​

Deployment, Service, ConfigMap, Ingress:

WhenNeeded + Delete + Stuck + apply
  • WhenNeeded: Spec changes should reflect in cluster immediately
  • Delete: Stateless resourcesβ€”no value keeping after node deletion
  • Stuck: Don't overwrite if another controller manages it (safety first)
  • apply: SSA preserves fields managed by HPA, admission controllers, etc.

Secret:

WhenNeeded + Delete + Force + apply
  • Force: Secrets are often pre-created by external systems (vault-agent, external-secrets). Lynq should take ownership.
  • Other policies same as Deployment for same reasons.

PVC (PersistentVolumeClaim):

Once + Retain + Stuck + apply
  • Once: PVC spec is immutable after creation (can't resize via Lynq)
  • Retain: Data is valuableβ€”never auto-delete storage
  • Stuck: If PVC already exists, investigate before proceeding
  • Risk: If you need to change storage size, delete PVC manually first

Init Job:

Once + Delete + Force + replace
  • Once: Run exactly once per node (initialization)
  • Delete: Job completedβ€”safe to remove
  • Force: Take ownership even if job was created manually
  • replace: Jobs are immutableβ€”must replace entirely

Namespace:

WhenNeeded + Retain + Force + apply
  • WhenNeeded: Labels/annotations may need updates
  • Retain: Deleting namespace cascades to ALL contentsβ€”dangerous!
  • Force: Take ownership even if pre-existing
  • Warning: Only use for tenant-specific namespaces, not shared namespaces

Policy Risk Assessment ​

| Policy Combination | Risk Level | Scenario |
|---|---|---|
| WhenNeeded + Delete + Stuck | 🟒 Low | Standard stateless resources |
| WhenNeeded + Retain + Stuck | 🟑 Medium | Resources that might orphan |
| Once + Retain + Stuck | 🟒 Low | Stateful resources (safe) |
| WhenNeeded + Delete + Force | 🟠 High | May overwrite other controllers |
| Once + Delete + Force | πŸ”΄ Very High | One-shot with forced ownership |

See Detailed Examples

For in-depth explanations with diagrams and scenarios, see Policy Combinations Examples.

Observability ​

Events ​

Policies trigger various events:

bash
# View Node events
kubectl describe lynqnode <lynqnode-name>

ConflictPolicy Event Comparison: Stuck vs Force ​

Scenario: Deployment acme-app already exists with field manager helm

Stuck Policy Events ​

bash
$ kubectl describe lynqnode acme-customer-web-app -n lynq-system

Events:
  Type     Reason            Age   From                  Message
  ----     ------            ----  ----                  -------
  Normal   Reconciling       10s   lynqnode-controller   Starting reconciliation
  Warning  ResourceConflict  8s    lynqnode-controller   Resource conflict detected for default/acme-app (Kind: Deployment, Policy: Stuck, ExistingManager: helm)
  Warning  Degraded          8s    lynqnode-controller   LynqNode degraded: 1 resource(s) in conflict

Status:
  Conditions:
    - Type: Ready
      Status: "False"
      Reason: ResourceConflict
    - Type: Degraded
      Status: "True"
      Reason: ConflictDetected
      Message: "Deployment default/acme-app managed by 'helm', not 'lynq'"
  Conflicted Resources: 1
  Ready Resources: 2
  Desired Resources: 3

Operator logs (Stuck):

2025-01-15T10:30:00Z WARN  controller.lynqnode  Conflict detected, policy=Stuck  {"lynqnode": "acme-customer-web-app", "resource": "Deployment/default/acme-app", "existingManager": "helm"}
2025-01-15T10:30:00Z INFO  controller.lynqnode  Marking node as Degraded  {"lynqnode": "acme-customer-web-app", "reason": "ConflictDetected"}

Force Policy Events ​

bash
$ kubectl describe lynqnode acme-customer-web-app -n lynq-system

Events:
  Type     Reason            Age   From                  Message
  ----     ------            ----  ----                  -------
  Normal   Reconciling       10s   lynqnode-controller   Starting reconciliation
  Warning  ForceApply        8s    lynqnode-controller   Forcing ownership of Deployment default/acme-app (previous manager: helm)
  Normal   ResourceApplied   7s    lynqnode-controller   Applied Deployment default/acme-app (forced ownership transfer)
  Normal   Ready             5s    lynqnode-controller   All resources are ready

Status:
  Conditions:
    - Type: Ready
      Status: "True"
      Reason: AllResourcesReady
  Conflicted Resources: 0  # ← Conflict resolved
  Ready Resources: 3
  Desired Resources: 3

Operator logs (Force):

2025-01-15T10:30:00Z WARN  controller.lynqnode  Conflict detected, forcing ownership  {"lynqnode": "acme-customer-web-app", "resource": "Deployment/default/acme-app", "previousManager": "helm", "newManager": "lynq"}
2025-01-15T10:30:01Z INFO  controller.lynqnode  Force apply succeeded  {"lynqnode": "acme-customer-web-app", "resource": "Deployment/default/acme-app"}

Deletion Events:

LynqNodeDeleting: Deleting LynqNode 'acme-prod-template' (template: prod-template, uid: acme)
LynqNodeDeleted: Successfully deleted LynqNode 'acme-prod-template'

Metrics ​

promql
# Count apply attempts by policy
apply_attempts_total{kind="Deployment",result="success",conflict_policy="Stuck"}

# Track conflicts
lynqnode_conflicts_total{lynqnode="acme-web",conflict_policy="Stuck"}

# Failed reconciliations
lynqnode_reconcile_duration_seconds{result="error"}

See Monitoring Guide for complete metrics reference.

Troubleshooting ​

Conflict Stuck: Step-by-Step Recovery ​

Symptom: LynqNode shows Degraded condition

bash
$ kubectl get lynqnode acme-customer-web-app -n lynq-system
NAME                        READY   DESIRED   FAILED   DEGRADED   AGE
acme-customer-web-app       2/3     3         0        true       10m

Step 1: Identify the conflicted resource

bash
# Check LynqNode status for conflict details
$ kubectl get lynqnode acme-customer-web-app -n lynq-system \
    -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'
Deployment default/acme-app managed by 'helm', not 'lynq'

Step 2: Investigate who owns the resource

bash
# Check the field manager (owner)
$ kubectl get deployment acme-app -o yaml | grep -A10 managedFields
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    manager: helm              # ← Owned by Helm!
    operation: Apply
    time: "2025-01-10T08:00:00Z"

Step 3: Choose your resolution strategy

| Strategy | When to Use | Command |
|---|---|---|
| Delete conflicting resource | Resource should be managed by Lynq | kubectl delete deployment acme-app |
| Change to Force policy | Lynq should take ownership | Edit LynqForm: conflictPolicy: Force |
| Use unique name | Keep both resources | Change nameTemplate to "{{ .uid }}-app-v2" |
| Remove from Lynq | Keep existing, don't manage | Remove resource from LynqForm |

Step 4: Verify resolution

bash
# After choosing a strategy, trigger reconciliation
$ kubectl annotate lynqnode acme-customer-web-app -n lynq-system \
    lynq.sh/force-reconcile=$(date +%s) --overwrite

# Verify degraded status is cleared
$ kubectl get lynqnode acme-customer-web-app -n lynq-system
NAME                        READY   DESIRED   FAILED   DEGRADED   AGE
acme-customer-web-app       3/3     3         0        false      12m

Resource Not Updating ​

Symptom: Changes to template don't apply

Cause: creationPolicy: Once is set

Diagnosis:

bash
# Check if resource has the Once annotation
$ kubectl get deployment acme-app -o jsonpath='{.metadata.annotations.lynq\.sh/created-once}'
true  # ← This resource won't be updated

Solution Options:

| Option | Action | Risk |
|---|---|---|
| Force update | Delete resource, let Lynq recreate | Brief downtime |
| Change policy | Update LynqForm to creationPolicy: WhenNeeded | Future updates allowed |
| Accept behavior | Keep as-is | None (expected) |

bash
# Option 1: Force recreation
$ kubectl delete deployment acme-app
# Lynq will recreate on next reconciliation

# Option 2: Change policy and remove annotation
$ kubectl patch deployment acme-app --type=json \
    -p='[{"op":"remove","path":"/metadata/annotations/lynq.sh~1created-once"}]'
# Then update LynqForm with creationPolicy: WhenNeeded

Resource Not Deleted ​

Symptom: Resource remains after LynqNode deletion

Cause: deletionPolicy: Retain is set

Diagnosis:

bash
# Check for orphan labels
$ kubectl get deployment acme-app -o jsonpath='{.metadata.labels.lynq\.sh/orphaned}'
true  # ← Orphaned by design

Solution:

bash
# Manual cleanup (if desired)
$ kubectl delete deployment acme-app

# Or find all orphaned resources
$ kubectl get all -A -l lynq.sh/orphaned=true

This is expected behavior for Retain policy.

Policy Migration Guide ​

Changing Policies on Existing Resources ​

Important

Policy changes affect future behavior, not existing resource state. Follow these migration procedures for safe transitions.

Migration: Delete β†’ Retain ​

Goal: Preserve resources that were previously set to Delete

Before migration:

yaml
# Current LynqForm
deployments:
  - id: app
    deletionPolicy: Delete  # ← Changing this

Step 1: Update the LynqForm

yaml
deployments:
  - id: app
    deletionPolicy: Retain  # ← New policy

Step 2: Trigger reconciliation to update tracking

bash
kubectl apply -f updated-lynqform.yaml

# Force reconciliation
kubectl annotate lynqnode <node-name> -n <namespace> \
    lynq.sh/force-reconcile=$(date +%s) --overwrite

Step 3: Verify the resource no longer has ownerReference

bash
$ kubectl get deployment acme-app -o jsonpath='{.metadata.ownerReferences}'
# Should be empty or null for Retain policy

TIP

The operator will automatically switch from ownerReference-based tracking to label-based tracking during reconciliation.

Migration: Retain β†’ Delete ​

Goal: Enable automatic cleanup for resources that were Retain

Warning: This will cause resources to be deleted when LynqNode is deleted!

Step 1: Verify you want automatic deletion

bash
# List all resources that will be affected
$ kubectl get all -l lynq.sh/node=<lynqnode-name>

Step 2: Update the LynqForm

yaml
deployments:
  - id: app
    deletionPolicy: Delete  # ← New policy

Step 3: Trigger reconciliation

bash
kubectl apply -f updated-lynqform.yaml
kubectl annotate lynqnode <node-name> -n <namespace> \
    lynq.sh/force-reconcile=$(date +%s) --overwrite

Step 4: Verify ownerReference is now set

bash
$ kubectl get deployment acme-app -o jsonpath='{.metadata.ownerReferences[0].name}'
acme-customer-web-app  # ← ownerReference restored

Migration: Stuck β†’ Force ​

Goal: Allow Lynq to take ownership of conflicted resources

Step 1: Identify currently conflicted resources

bash
$ kubectl get lynqnode <node-name> -o jsonpath='{.status.conditions[?(@.type=="Degraded")]}'

Step 2: Update the LynqForm

yaml
deployments:
  - id: app
    conflictPolicy: Force  # ← New policy

Step 3: Apply and monitor

bash
kubectl apply -f updated-lynqform.yaml

# Watch for ForceApply events
kubectl get events -n <namespace> --field-selector reason=ForceApply

Step 4: Verify ownership transferred

bash
$ kubectl get deployment acme-app -o yaml | grep -A5 managedFields
# Should show "manager: lynq"

Migration: Once β†’ WhenNeeded ​

Goal: Allow updates to resources that were created with Once

Step 1: Remove the Once annotation from existing resources

bash
$ kubectl get deployment acme-app -o jsonpath='{.metadata.annotations}'
# Find: "lynq.sh/created-once": "true"

$ kubectl patch deployment acme-app --type=json \
    -p='[{"op":"remove","path":"/metadata/annotations/lynq.sh~1created-once"}]'

Step 2: Update the LynqForm

yaml
deployments:
  - id: app
    creationPolicy: WhenNeeded  # ← New policy

Step 3: Apply and verify updates work

bash
kubectl apply -f updated-lynqform.yaml

# Make a template change and verify it's applied
# e.g., change image tag, then check deployment

Migration Checklist ​

Before any policy migration:

  • [ ] Backup current resource state: kubectl get <resource> -o yaml > backup.yaml
  • [ ] Identify all affected LynqNodes: kubectl get lynqnodes -l lynq.sh/template=<template-name>
  • [ ] Plan for downtime if needed (especially Delete policy changes)
  • [ ] Test in non-production environment first
  • [ ] Monitor events and operator logs during migration

See Also ​
