Development Guide
Guide for developing and contributing to Lynq.
First time here?
Start with the Quick Start guide to get familiar with the system before diving into development tooling.
Setup
Prerequisites
| Tool | Version / Notes |
|---|---|
| Go | 1.22+ |
| kubectl | Matches target cluster |
| kind or minikube | Local cluster for testing |
| Docker | Required for image builds |
| make | Used for build/test helpers |
Clone Repository
git clone https://github.com/k8s-lynq/lynq.git
cd lynq
Install Dependencies
go mod download
Local Development
Running Locally
# Install CRDs
make install
# Run controller locally (uses ~/.kube/config)
make run
# Run with debug logging
LOG_LEVEL=debug make run
Local Run Limitations
make run runs the operator outside the cluster, which means:
- ⚠️ Webhooks are NOT available (no TLS certificates)
- ⚠️ No validation at admission time (invalid configs will only fail at reconciliation)
- ⚠️ No defaulting (all fields must be specified explicitly)
For complete testing with webhooks, deploy to cluster with cert-manager:
# See Local Development with Minikube guide
./scripts/deploy-to-minikube.sh  # Includes cert-manager and webhooks
When to use make run:
- Quick iteration on controller logic
- Testing reconciliation loops
- Debugging without webhook complications
When to deploy to cluster:
- Testing webhooks (validation/defaulting)
- Final testing before committing
- Verifying production-like behavior
Testing Against Local Cluster
# Create kind cluster
kind create cluster --name lynq-dev
# Install CRDs
make install
# Run operator
make run
Building
Build Binary
# Build for current platform
make build
# Binary output: bin/manager
./bin/manager --help
Build Container Image
# Build image
make docker-build IMG=myregistry/lynq:dev
# Push image
make docker-push IMG=myregistry/lynq:dev
# Build multi-platform
docker buildx build --platform linux/amd64,linux/arm64 \
-t myregistry/lynq:dev \
--push .
Testing
Unit Tests
# Run all unit tests
make test
# Run with coverage
make test-coverage
# View coverage report
go tool cover -html=cover.out
Integration Tests
# Run integration tests (requires cluster)
make test-integration
Cluster required
Integration and E2E suites create and mutate Kubernetes resources. Run them against disposable clusters.
E2E Tests
End-to-End tests run against a real Kubernetes cluster (Kind) to validate complete scenarios.
Test Strategy
Lynq uses a 3-tier testing approach:
| Test Type | Environment | Speed | Use Case |
|---|---|---|---|
| Unit | fake client | Very Fast (seconds) | Logic validation, TDD |
| Integration | envtest | Fast (seconds-minutes) | Controller behavior |
| E2E | Kind cluster | Slower (minutes) | Real scenarios, policies |
When to use E2E tests:
- Testing actual Kubernetes behavior (ownerReferences, labels, etc.)
- Validating policy behaviors (CreationPolicy, DeletionPolicy)
- End-to-end workflows with multiple resources
- CI/CD validation before release
Prerequisites
# Install Kind (required for E2E tests)
# macOS
brew install kind
# Linux
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.25.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Verify installation
kind version
Quick Start
# Run all E2E tests (creates cluster, runs tests, cleans up)
make test-e2e
# This will:
# 1. Create Kind cluster "lynq-test-e2e"
# 2. Build and load operator image
# 3. Install CRDs and deploy operator
# 4. Run all E2E tests (including policy tests)
# 5. Delete cluster
Running Specific Tests
# Setup cluster once
make setup-test-e2e
# Run specific test suites
go test ./test/e2e/ -v -ginkgo.focus="Policy Behaviors" # Policy tests only
go test ./test/e2e/ -v -ginkgo.focus="CreationPolicy" # Creation policy only
go test ./test/e2e/ -v -ginkgo.focus="DeletionPolicy" # Deletion policy only
# Cleanup when done
make cleanup-test-e2e
Development Workflow (Fast Iteration)
For rapid development cycles, reuse the cluster:
# 1. Create cluster once
make setup-test-e2e
# 2. Make code changes, then:
make docker-build IMG=example.com/lynq:v0.0.1
kind load docker-image example.com/lynq:v0.0.1 --name lynq-test-e2e
kubectl rollout restart deployment lynq-controller-manager -n lynq-system
# 3. Run tests
go test ./test/e2e/ -v -ginkgo.focus="Policy"
# 4. Repeat steps 2-3 as needed
# 5. Cleanup when done
make cleanup-test-e2e
Writing BDD E2E Tests
Policy E2E tests use Ginkgo BDD framework with Given-When-Then pattern:
// test/e2e/policy_e2e_test.go
var _ = Describe("Policy Behaviors", func() {
Context("CreationPolicy", func() {
It("should create resource only once with Once policy", func() {
// Given: A LynqNode with CreationPolicy=Once
By("creating LynqNode with Once policy")
createLynqNode(nodeYAML)
// When: ConfigMap is created
By("verifying ConfigMap has created-once annotation")
Eventually(func() string {
return getAnnotation(cmName, "lynq.sh/created-once")
}, timeout).Should(Equal("true"))
// And: Update spec to change data
By("updating LynqNode spec")
updateLynqNode(updatedYAML)
// Then: ConfigMap should NOT be updated
By("verifying ConfigMap data remains unchanged")
Consistently(func() string {
return getConfigMapData(cmName, "key")
}, "30s", "5s").Should(Equal("initial-value"))
})
})
})
Key patterns:
- Use By() for clear test steps (Given-When-Then)
- Use Eventually() for async operations (resource creation, updates)
- Use Consistently() to verify state doesn't change
- Use BeforeEach/AfterEach for setup/cleanup
Troubleshooting E2E Tests
# Test timeout - check pod status
kubectl get pods -n lynq-system
kubectl logs -n lynq-system deployment/lynq-controller-manager
# Cluster stuck - force cleanup
kind delete cluster --name lynq-test-e2e
# Image not loaded - verify
docker exec -it lynq-test-e2e-control-plane crictl images | grep lynq
# Namespace stuck - remove finalizers
kubectl patch ns policy-test -p '{"metadata":{"finalizers":[]}}' --type=merge
CI/CD Integration
E2E tests run automatically in GitHub Actions:
# .github/workflows/test-e2e.yml
# Triggers on: push, pull_request
# Runs: make test-e2e
Tests run in CI on:
- Every push to main/master
- Every pull request
- Manual workflow dispatch
Code Quality
Linting
# Run linter
make lint
# Auto-fix issues
golangci-lint run --fix
Formatting
# Format code
go fmt ./...
# Or use goimports
goimports -w .
Generate Code
# Generate DeepCopy methods
make generate
# Generate CRD manifests, RBAC, etc.
make manifests
Project Structure
lynq/
├── api/v1/                    # CRD types
│   ├── lynqnode_types.go
│   ├── lynqhub_types.go
│   ├── lynqform_types.go
│   └── common_types.go
├── internal/controller/       # Controllers
│   ├── lynqnode_controller.go
│   ├── lynqhub_controller.go
│   └── lynqform_controller.go
├── internal/apply/            # SSA apply engine
├── internal/database/         # Database connectors
├── internal/graph/            # Dependency graph
├── internal/readiness/        # Readiness checks
├── internal/template/         # Template engine
├── internal/metrics/          # Prometheus metrics
├── config/                    # Kustomize configs
│   ├── crd/                   # CRD manifests
│   ├── rbac/                  # RBAC configs
│   ├── manager/               # Deployment configs
│   └── samples/               # Example CRs
├── test/                      # Tests
│   ├── e2e/                   # E2E tests
│   └── utils/                 # Test utilities
├── docs/                      # Documentation
└── cmd/                       # Entry point
Adding Features
New CRD Field
- Update API types:
// api/v1/lynqnode_types.go
type LynqNodeSpec struct {
NewField string `json:"newField,omitempty"`
}
- Generate code:
make generate
make manifests
- Update controller logic
- Add tests
- Update documentation
New Controller
- Create controller file:
// internal/controller/myresource_controller.go
package controller
type MyResourceReconciler struct {
client.Client
Scheme *runtime.Scheme
}
func (r *MyResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
// Implementation
}
- Register controller:
// cmd/main.go
if err = (&controller.MyResourceReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
// Handle error
}
- Add tests
Adding a New Datasource
Lynq uses a pluggable adapter pattern for datasources, making it easy to add support for new databases or data sources.
Architecture
Quick Reference
1. Implement Interface (internal/datasource/your_adapter.go):
package datasource
type YourAdapter struct {
conn *YourConnection
}
// QueryNodes retrieves node data
func (a *YourAdapter) QueryNodes(ctx context.Context, config QueryConfig) ([]NodeRow, error) {
// 1. Build query using config.Table, config.ValueMappings, config.ExtraMappings
// 2. Execute query
// 3. Map results to []NodeRow
// 4. Filter active nodes
return nodes, nil
}
// Close cleans up resources
func (a *YourAdapter) Close() error {
return a.conn.Close()
}
2. Register in Factory (internal/datasource/interface.go):
const SourceTypeYours SourceType = "yourdatasource"
func NewDatasource(sourceType SourceType, config Config) (Datasource, error) {
switch sourceType {
case SourceTypeYours:
return NewYourAdapter(config)
// ... other cases
}
}
3. Add API Types (api/v1/lynqhub_types.go):
const SourceTypeYours SourceType = "yourdatasource"
type LynqHubSourceSpec struct {
// +kubebuilder:validation:Enum=mysql;postgresql;yourdatasource
Type SourceType `json:"type"`
YourDatasource *YourDatasourceSpec `json:"yourdatasource,omitempty"`
}
4. Test:
make test
make lint
make build
Full Guide
Detailed Step-by-Step Guide: Contributing a New Datasource
The full guide includes:
- Interface explanation with examples
- Complete MySQL reference implementation walkthrough
- PostgreSQL adapter example
- Testing strategies
- Documentation templates
- PR checklist
Key Files
| File | Purpose |
|---|---|
| internal/datasource/interface.go | Interface definition + factory |
| internal/datasource/mysql.go | Reference implementation |
| internal/datasource/your_adapter.go | Your implementation |
| api/v1/lynqhub_types.go | API types |
| internal/controller/lynqhub_controller.go | Controller integration |
Example: Study MySQL Adapter
The MySQL adapter (internal/datasource/mysql.go) is a complete, production-ready reference:
# View the implementation
cat internal/datasource/mysql.go
# Key sections:
# - NewMySQLAdapter(): Connection setup
# - QueryNodes(): Query + mapping + filtering
# - Close(): Resource cleanup
# - Helper functions: joinColumns(), isActive()
What to learn:
- Connection pooling configuration
- Query building with column mappings
- Result scanning and type handling
- Filtering logic (active nodes only)
- Error handling patterns
Development Workflow
# 1. Create adapter file
touch internal/datasource/postgres.go
# 2. Implement interface
# (Copy mysql.go as template)
# 3. Register in factory
vim internal/datasource/interface.go
# 4. Add API types
vim api/v1/lynqhub_types.go
# 5. Generate manifests
make manifests
# 6. Write tests
touch internal/datasource/postgres_test.go
# 7. Test
make test
# 8. Lint
make lint
# 9. Build
make build
# 10. Test locally
make install
make run
kubectl apply -f config/samples/postgres/
Common Patterns
SQL-based datasources (MySQL, PostgreSQL):
- Use the database/sql package
- Build SELECT queries dynamically
- Use parameterized queries for safety
- Handle NULL values with sql.NullString
NoSQL datasources (MongoDB, DynamoDB):
- Use native client libraries
- Map documents/items to NodeRow
- Handle different query syntax
- Consider pagination for large datasets
REST APIs:
- Use a net/http client
- Unmarshal JSON to structs
- Map to NodeRow
- Handle authentication
Tips
- Start with MySQL adapter - Copy it as a template
- Focus on QueryNodes() - This is the core logic
- Handle errors gracefully - Return clear error messages
- Filter consistently - Use the same isActive() logic
- Test thoroughly - Unit tests + integration tests
- Document well - Help users configure your datasource
Contributing
Contribution checklist
Always include tests, update docs, and run make lint before opening a pull request.
Workflow
- Fork repository
- Create feature branch
- Make changes
- Add tests
- Run linter: make lint
- Run tests: make test
- Commit with conventional commits
- Open Pull Request
Conventional Commits
feat: add new feature
fix: fix bug
docs: update documentation
test: add tests
refactor: refactor code
chore: maintenance tasks
Pull Request Template
## Description
Brief description of changes
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing performed
## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests passing
Release Process
Release automation
Tags trigger the release pipeline. Confirm CI is green before pushing a new tag.
Version Bump
1. Update version in:
   - README.md
   - config/manager/kustomization.yaml
2. Generate changelog
3. Create git tag:
git tag -a v1.1.0 -m "Release v1.1.0"
git push origin v1.1.0
4. GitHub Actions builds and publishes release
Useful Commands
# Install CRDs
make install
# Uninstall CRDs
make uninstall
# Deploy operator
make deploy IMG=<image>
# Undeploy operator
make undeploy
# Run locally
make run
# Build binary
make build
# Build container
make docker-build IMG=<image>
# Run tests
make test
# Run linter
make lint
# Generate code
make generate manifests