Local Development with Minikube
Development workflow guide for contributing to and modifying Lynq.
Quick taste
If you only want to experience the operator, follow the Quick Start guide instead.
Overview
This guide covers the development workflow for making code changes to Lynq and testing them locally on Minikube.
Use this guide when you want to:
- ✅ Modify Lynq source code
- ✅ Add new features or fix bugs
- ✅ Test code changes locally before committing
- ✅ Debug the operator with breakpoints
- ✅ Iterate quickly on code changes
For initial setup: Follow the Quick Start guide first to set up your Minikube environment.
Prerequisites
Complete the Quick Start guide first. You should have:
- ✅ Minikube cluster running (lynq profile)
- ✅ cert-manager installed (automatically by the setup script)
- ✅ Lynq deployed with webhooks enabled
- ✅ MySQL test database (optional, for full testing)
cert-manager Required
cert-manager is REQUIRED for all installations. The setup scripts install it for you. If setting up manually, install cert-manager before deploying the operator.
Additional development tools:
- Go 1.22+
- make
- golangci-lint (optional, for linting)
- delve (optional, for debugging)
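A quick way to confirm the tooling is in place (version outputs are illustrative; only Go has a hard minimum here):
go version             # should report go1.22 or newer
make --version
golangci-lint version  # optional
dlv version            # optional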
Development Workflow
Typical Development Cycle
# 1. Make code changes
vim internal/controller/lynqnode_controller.go
# 2. Run unit tests
make test
# 3. Run linter
make lint
# 4. Build and deploy to Minikube
./scripts/deploy-to-minikube.sh
# 5. View operator logs
kubectl logs -n lynq-system -l control-plane=controller-manager -f
# 6. Test changes
kubectl apply -f config/samples/
# 7. Verify results
kubectl get lynqnodes
kubectl get all -n node-<uid>
# 8. Repeat steps 1-7 as needed
Iteration speed
Expect roughly 1–2 minutes per build + deploy cycle when using the provided scripts.
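To see where your own cycle lands, you can simply time the deploy script:
time ./scripts/deploy-to-minikube.sh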
Code Changes & Rebuilding
Quick Rebuild and Deploy
After making code changes:
# Rebuild and redeploy operator
./scripts/deploy-to-minikube.sh
This script:
- Builds new Docker image with timestamp tag
- Loads image into Minikube's internal registry
- Updates operator deployment
- Waits for readiness
Why timestamp tags? Each deployment gets a unique tag, preventing Kubernetes from using cached old images.
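If you ever need to reproduce the script's steps by hand, the rough shape is below. This is a sketch only: the container name manager is an assumption (the common kubebuilder default), so check the script and manifests for the authoritative names.
TAG=lynq:dev-$(date +%Y%m%d%H%M%S)   # unique timestamp tag
make docker-build IMG=$TAG            # build the image
minikube -p lynq image load $TAG      # load it into the Minikube runtime
kubectl -n lynq-system set image deployment/lynq-controller-manager manager=$TAG  # 'manager' container name is an assumption
kubectl -n lynq-system rollout status deployment/lynq-controller-manager          # wait for readiness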
Custom Image Tag
Use a custom tag for easier identification:
IMG=lynq:my-feature ./scripts/deploy-to-minikube.sh
Manual Build (if needed)
# Build binary locally
make build
# Build Docker image
make docker-build IMG=lynq:dev
# Load into Minikube
minikube -p lynq image load lynq:dev
Running Operator Locally (Outside Cluster)
For fastest iteration, run the operator locally on your machine while connecting to the Minikube cluster:
# Ensure CRDs are installed
make install
# Run operator locally
make run
Benefits
- ✅ Instant restarts (no image build/load)
- ✅ Direct Go debugging with breakpoints
- ✅ Real-time logs in your terminal
- ✅ Fast feedback loop (~5 seconds)
Limitations
- ⚠️ Webhooks unavailable (TLS certificates require in-cluster deployment with cert-manager)
- ⚠️ No validation at admission time (changes are only validated at reconciliation)
- ⚠️ No automatic defaulting (must specify all fields manually)
- ⚠️ Runtime differs from production environment
When to use:
- Controller logic changes
- Quick iteration on reconciliation loops
- Debugging with delve
When NOT to use:
- Testing webhooks (requires full deployment with cert-manager)
- Testing validation/defaulting behavior
- Verifying in-cluster networking
- Final testing before PR
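One practical note before a local run: make run talks to whatever cluster your current kubeconfig context points at, so confirm it is the Minikube cluster first (the context name lynq matches the aliases used later in this guide):
kubectl config current-context   # should print: lynq
kubectl config use-context lynq  # switch if it does not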
Testing with Webhooks
For complete testing including webhooks, always deploy to cluster:
./scripts/deploy-to-minikube.sh  # Includes cert-manager and webhooks
Testing with local run:
# Terminal 1: Run operator
make run
# Terminal 2: Apply resources
kubectl apply -f config/samples/
kubectl get lynqnodes --watch
# Terminal 3: View database changes
kubectl exec -it deployment/mysql -n lynq-test -- \
mysql -u node_reader -p nodes -e "SELECT * FROM node_configs;"
Debugging
Debug with Delve
Run operator with debugger:
# Install delve if needed
go install github.com/go-delve/delve/cmd/dlv@latest
# Run with delve
dlv debug ./cmd/main.go -- --zap-devel=true
Then in delve:
(dlv) break internal/controller/lynqnode_controller.go:123
(dlv) continue
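If you prefer driving breakpoints from an IDE rather than the delve prompt, delve can also run as a headless server that you attach to remotely (port 2345 below is an arbitrary choice):
# Terminal 1: start the operator under a headless delve server
dlv debug ./cmd/main.go --headless --listen=:2345 --api-version=2 -- --zap-devel=true
# Terminal 2 (or your IDE's remote-attach debug configuration)
dlv connect 127.0.0.1:2345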
Debug Operator Logs
View and filter operator logs:
# Default logs
kubectl logs -n lynq-system -l control-plane=controller-manager -f
# Filter for specific node
kubectl logs -n lynq-system -l control-plane=controller-manager | grep acme-corp
# Follow logs for errors only
kubectl logs -n lynq-system -l control-plane=controller-manager -f | grep -i error
# View logs from previous crash
kubectl logs -n lynq-system -l control-plane=controller-manager --previous
Debug Test Resources
View what the operator sees:
# Check LynqNode CR status
kubectl get lynqnode acme-corp-test-template -o yaml
# Check hub sync status
kubectl get lynqhub test-hub -o yaml | yq '.status'
# Check template
kubectl get lynqform test-template -o yaml
# View events
kubectl get events --sort-by='.lastTimestamp' -n lynq-system
# Describe resource for events
kubectl describe lynqnode acme-corp-test-template
Testing
Unit Tests
# Run all tests
make test
# Run with coverage
make test-coverage
# Run specific package
go test ./internal/controller/... -v
# Run specific test
go test ./internal/controller/ -run TestLynqNodeController_Reconcile -v
Integration Tests
# Requires running Minikube cluster
make test-integration
E2E Testing
Test complete workflow:
# 1. Deploy fresh operator
./scripts/deploy-to-minikube.sh
# 2. Deploy test database
./scripts/deploy-mysql.sh
# 3. Deploy test hub and template
./scripts/deploy-lynqhub.sh
./scripts/deploy-lynqform.sh
# 4. Verify nodes created
kubectl get lynqnodes
kubectl get deployments,services -l lynq.sh/node
# 5. Test lifecycle: Add node
kubectl exec -it deployment/mysql -n lynq-test -- \
mysql -u root -p nodes -e \
"INSERT INTO node_configs VALUES ('delta-co', 'https://delta.example.com', 1, 'enterprise');"
# Wait 30s, then verify
kubectl get lynqnode delta-co-test-template
# 6. Test lifecycle: Deactivate node
kubectl exec -it deployment/mysql -n lynq-test -- \
mysql -u root -p nodes -e \
"UPDATE node_configs SET is_active = 0 WHERE node_id = 'acme-corp';"
# Wait 30s, then verify deletion
kubectl get lynqnode acme-corp-test-template
# Should be NotFound
Tips for Fast Iteration
1. Skip Image Build for Controller Changes
If only changing controller logic (not CRDs, RBAC, etc.):
# Run locally instead of deploying
make run
~10x faster than a full build/deploy cycle.
2. Keep Logs Open
# In a dedicated terminal
kubectl logs -n lynq-system -l control-plane=controller-manager -f
3. Use Watch Commands
# Watch nodes
watch kubectl get lynqnodes
# Watch specific node
watch kubectl get lynqnode acme-corp-test-template -o yaml
4. Quick MySQL Queries
Create aliases:
alias mysql-test='kubectl exec -it deployment/mysql -n lynq-test -- mysql -u node_reader -p$(kubectl get secret mysql-credentials -n lynq-test -o jsonpath="{.data.password}" | base64 -d) nodes'
# Then use:
mysql-test -e "SELECT * FROM node_configs;"
5. Fast Context Switching
# Add to ~/.zshrc or ~/.bashrc
alias kto='kubectl config use-context lynq'
alias ktos='kubectl -n lynq-system'
alias ktot='kubectl -n lynq-test'
# Usage:
kto # Switch to lynq context
ktos get pods # Get pods in operator namespace
Common Development Scenarios
Scenario 1: Testing Template Changes
# 1. Modify template logic in lynqnode_controller.go
vim internal/controller/lynqnode_controller.go
# 2. Run locally for quick feedback
make run
# 3. In another terminal, apply test template
kubectl apply -f config/samples/operator_v1_lynqform.yaml
# 4. Watch logs and verify rendered resources
kubectl logs -n lynq-system -l control-plane=controller-manager -f
kubectl get lynqnode -o yaml | grep -A 10 "spec:"
Scenario 2: Testing Database Sync
# 1. Modify hub controller
vim internal/controller/lynqhub_controller.go
# 2. Deploy to test in-cluster
./scripts/deploy-to-minikube.sh
# 3. Change database and watch sync
mysql-test -e "UPDATE node_configs SET subscription_plan = 'premium' WHERE node_id = 'acme-corp';"
# 4. Verify LynqNode CR updated
kubectl get lynqnode acme-corp-test-template -o yaml | grep planId
Scenario 3: Testing CRD Changes
# 1. Modify CRD in api/v1/
vim api/v1/lynqnode_types.go
# 2. Regenerate manifests
make manifests
# 3. Install updated CRDs
make install
# 4. Rebuild and deploy operator
./scripts/deploy-to-minikube.sh
# 5. Test with updated CRD
kubectl apply -f config/samples/
Scenario 4: Testing Webhook Validation
# 1. Modify webhook in api/v1/*_webhook.go
vim api/v1/lynqform_webhook.go
# 2. Must deploy to cluster (webhooks need TLS)
./scripts/deploy-to-minikube.sh
# 3. Test invalid resource
kubectl apply -f - <<EOF
apiVersion: operator.lynq.sh/v1
kind: LynqForm
metadata:
  name: invalid-template
spec:
  hubId: non-existent-registry # Should fail validation
EOF
# 4. Should see validation error
Cleanup
Partial Cleanup (Keep Cluster)
# Delete test resources
kubectl delete lynqnodes --all
kubectl delete lynqform test-template
kubectl delete lynqhub test-hub
# Delete MySQL
kubectl delete deployment,service,pvc mysql -n lynq-test
# Delete operator
kubectl delete deployment lynq-controller-manager -n lynq-system
Full Cleanup
# Delete everything including cluster
./scripts/cleanup-minikube.sh
# Answer 'y' to all prompts for complete cleanup
Fresh Start
# Complete reset
./scripts/cleanup-minikube.sh # Delete everything
./scripts/setup-minikube.sh # Recreate cluster
./scripts/deploy-to-minikube.sh # Deploy operator
Troubleshooting
Operator Won't Start
# Check pod status
kubectl get pods -n lynq-system
# Check logs
kubectl logs -n lynq-system -l control-plane=controller-manager
# Common issues:
# 1. cert-manager not ready
kubectl get pods -n cert-manager
# If pods are not running, wait or check:
kubectl describe pods -n cert-manager
# 2. Webhook certificates not ready
kubectl get certificate -n lynq-system
# Should show "Ready=True"
# 3. Image not loaded
minikube -p lynq image ls | grep lynq
# 4. CRDs not installed
kubectl get crd | grep lynq
cert-manager is Critical
If the operator pod fails to start with webhook certificate errors, cert-manager is likely not installed or not ready. Check:
# Verify cert-manager installation
kubectl get pods -n cert-manager
# Check certificate status
kubectl get certificate -n lynq-system
kubectl describe certificate -n lynq-system
# If missing, install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
kubectl wait --for=condition=Available --timeout=300s -n cert-manager deployment/cert-manager-webhook
Tests Failing
# Ensure test cluster is accessible
kubectl cluster-info
# Check if CRDs are installed
kubectl get crd | grep lynq
# Run tests with verbose output
go test ./... -v -count=1
Image Not Updating
# Force rebuild without cache
docker build --no-cache -t lynq:dev .
# Reload into Minikube
minikube -p lynq image load lynq:dev
# Restart operator pod
kubectl rollout restart deployment -n lynq-system lynq-controller-manager
Advanced Workflows
Multiple Minikube Profiles
Work on multiple features simultaneously:
# Feature A cluster
MINIKUBE_PROFILE=feature-a ./scripts/setup-minikube.sh
MINIKUBE_PROFILE=feature-a ./scripts/deploy-to-minikube.sh
# Feature B cluster
MINIKUBE_PROFILE=feature-b ./scripts/setup-minikube.sh
MINIKUBE_PROFILE=feature-b ./scripts/deploy-to-minikube.sh
# Switch between them
kubectl config use-context feature-a
kubectl config use-context feature-b
Custom Resource Allocations
# More powerful cluster for load testing
MINIKUBE_CPUS=8 \
MINIKUBE_MEMORY=16384 \
./scripts/setup-minikube.sh
See Also
- Quick Start - Initial setup guide
- Development Guide - General development practices
- Contributing - Contribution guidelines
- Troubleshooting - Common issues
Summary
Fast iteration workflow:
- Make code changes
- Run make run for controller changes (~5s feedback)
- Or run ./scripts/deploy-to-minikube.sh for full testing (~2 min)
- Test with kubectl apply -f config/samples/
Key takeaways:
- Use make run for fastest iteration (no webhooks)
- Use ./scripts/deploy-to-minikube.sh for full testing (with webhooks)
- Keep logs open in a separate terminal
- Test E2E with real MySQL database
- Clean up and reset when needed
Happy coding! 🚀
