Development Guide
Guide for developing and contributing to Lynq.
First time here?
Start with the Quick Start guide to get familiar with the system before diving into development tooling.
Setup
Prerequisites
| Tool | Version / Notes |
|---|---|
| Go | 1.22+ |
| kubectl | Matches target cluster |
| kind or minikube | Local cluster for testing |
| Docker | Required for image builds |
| make | Used for build/test helpers |
Clone Repository
```bash
git clone https://github.com/k8s-lynq/lynq.git
cd lynq
```
Install Dependencies
```bash
go mod download
```
Local Development
Running Locally
```bash
# Install CRDs
make install

# Run controller locally (uses ~/.kube/config)
make run

# Run with debug logging
LOG_LEVEL=debug make run
```
Local Run Limitations
`make run` runs the operator outside the cluster, which means:
- ⚠️ Webhooks are NOT available (no TLS certificates)
- ⚠️ No validation at admission time (invalid configs will only fail at reconciliation)
- ⚠️ No defaulting (all fields must be specified explicitly)
For complete testing with webhooks, deploy to cluster with cert-manager:
```bash
# See the Local Development with Minikube guide
./scripts/deploy-to-minikube.sh  # Includes cert-manager and webhooks
```
When to use `make run`:
- Quick iteration on controller logic
- Testing reconciliation loops
- Debugging without webhook complications
When to deploy to cluster:
- Testing webhooks (validation/defaulting)
- Final testing before committing
- Verifying production-like behavior
Testing Against Local Cluster
```bash
# Create kind cluster
kind create cluster --name lynq-dev

# Install CRDs
make install

# Run operator
make run
```
Building
Build Binary
```bash
# Build for current platform
make build

# Binary output: bin/manager
./bin/manager --help
```
Build Container Image
```bash
# Build image
make docker-build IMG=myregistry/lynq:dev

# Push image
make docker-push IMG=myregistry/lynq:dev

# Build multi-platform image
docker buildx build --platform linux/amd64,linux/arm64 \
  -t myregistry/lynq:dev \
  --push .
```
Testing
Unit Tests
```bash
# Run all unit tests
make test

# Run with coverage
make test-coverage

# View coverage report
go tool cover -html=cover.out
```
Integration Tests
```bash
# Run integration tests (requires cluster)
make test-integration
```
Cluster required
Integration and E2E suites create and mutate Kubernetes resources. Run them against disposable clusters.
E2E Tests
```bash
# Create test cluster
kind create cluster --name e2e-test

# Run E2E tests
make test-e2e

# Cleanup
kind delete cluster --name e2e-test
```
Code Quality
Linting
```bash
# Run linter
make lint

# Auto-fix issues
golangci-lint run --fix
```
Formatting
```bash
# Format code
go fmt ./...

# Or use goimports
goimports -w .
```
Generate Code
```bash
# Generate DeepCopy methods
make generate

# Generate CRD manifests, RBAC, etc.
make manifests
```
Project Structure
```text
lynq/
├── api/v1/                   # CRD types
│   ├── lynqnode_types.go
│   ├── lynqhub_types.go
│   ├── lynqform_types.go
│   └── common_types.go
├── internal/controller/      # Controllers
│   ├── lynqnode_controller.go
│   ├── lynqhub_controller.go
│   └── lynqform_controller.go
├── internal/apply/            # SSA apply engine
├── internal/database/         # Database connectors
├── internal/graph/             # Dependency graph
├── internal/readiness/         # Readiness checks
├── internal/template/          # Template engine
├── internal/metrics/           # Prometheus metrics
├── config/                     # Kustomize configs
│   ├── crd/                    # CRD manifests
│   ├── rbac/                   # RBAC configs
│   ├── manager/                # Deployment configs
│   └── samples/                # Example CRs
├── test/                       # Tests
│   ├── e2e/                    # E2E tests
│   └── utils/                  # Test utilities
├── docs/                       # Documentation
└── cmd/                        # Entry point
```
Adding Features
New CRD Field
- Update API types:
```go
// api/v1/lynqnode_types.go
type LynqNodeSpec struct {
    // NewField is the newly added optional field (example).
    NewField string `json:"newField,omitempty"`
}
```
- Generate code:
```bash
make generate
make manifests
```
- Update controller logic (see the sketch after this list)
- Add tests
- Update documentation
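For the controller-logic step, here is a minimal sketch of how the reconciler might consume the new field. Only `NewField` comes from the snippet above; the module path in the import is an assumption, and the sketch assumes `LynqNodeReconciler` embeds `client.Client` in the usual kubebuilder style.
```go
package controller

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	lynqv1 "github.com/k8s-lynq/lynq/api/v1" // assumed module path
)

// Reconcile fetches the LynqNode and branches on the new field.
// LynqNodeReconciler lives in lynqnode_controller.go and is assumed
// to embed client.Client.
func (r *LynqNodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var node lynqv1.LynqNode
	if err := r.Get(ctx, req.NamespacedName, &node); err != nil {
		// Ignore not-found errors: the object may already be deleted.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// An empty NewField means "feature disabled" in this sketch.
	if node.Spec.NewField != "" {
		// ... drive reconciliation behavior from NewField here ...
	}
	return ctrl.Result{}, nil
}
```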
New Controller
- Create controller file:
```go
// internal/controller/myresource_controller.go
package controller

type MyResourceReconciler struct {
    client.Client
    Scheme *runtime.Scheme
}

func (r *MyResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Implementation goes here.
    return ctrl.Result{}, nil
}

// SetupWithManager registers the reconciler with the manager.
func (r *MyResourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&myv1.MyResource{}). // myv1: your API package
        Complete(r)
}
```
- Register controller:
```go
// cmd/main.go
if err = (&controller.MyResourceReconciler{
    Client: mgr.GetClient(),
    Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
    // Handle the error (log and exit)
}
```
- Add tests, as sketched below
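For the test step, a minimal reconcile test using controller-runtime's fake client. It continues the hypothetical MyResource example above; the `myv1` import path is a placeholder.
```go
package controller

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"

	myv1 "example.com/myproject/api/v1" // placeholder API package
)

func TestMyResourceReconcile(t *testing.T) {
	// Build a scheme that knows about the hypothetical MyResource type.
	scheme := runtime.NewScheme()
	if err := myv1.AddToScheme(scheme); err != nil {
		t.Fatalf("add to scheme: %v", err)
	}

	// Seed the fake client with one object.
	obj := &myv1.MyResource{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
	}
	c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(obj).Build()

	r := &MyResourceReconciler{Client: c, Scheme: scheme}
	req := ctrl.Request{NamespacedName: types.NamespacedName{Name: "demo", Namespace: "default"}}

	if _, err := r.Reconcile(context.Background(), req); err != nil {
		t.Fatalf("unexpected reconcile error: %v", err)
	}
}
```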
Adding a New Datasource
Lynq uses a pluggable adapter pattern for datasources, making it straightforward to add support for new databases and other backends.
Quick Reference
1. Implement Interface (`internal/datasource/your_adapter.go`):
```go
package datasource

type YourAdapter struct {
    conn *YourConnection
}

// QueryNodes retrieves node data
func (a *YourAdapter) QueryNodes(ctx context.Context, config QueryConfig) ([]NodeRow, error) {
    var nodes []NodeRow
    // 1. Build query using config.Table, config.ValueMappings, config.ExtraMappings
    // 2. Execute query
    // 3. Map results to []NodeRow
    // 4. Filter active nodes
    return nodes, nil
}

// Close cleans up resources
func (a *YourAdapter) Close() error {
    return a.conn.Close()
}
```
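The adapter satisfies the Datasource interface consumed by the factory below. The authoritative definition lives in `internal/datasource/interface.go`; inferred from the methods above, it looks roughly like this. The `QueryConfig` and `NodeRow` shapes shown are assumptions for illustration (later sketches reuse them):
```go
package datasource

import "context"

// Datasource is implemented by every adapter (sketch; see
// interface.go for the authoritative definition).
type Datasource interface {
	// QueryNodes returns one row per node found in the source.
	QueryNodes(ctx context.Context, config QueryConfig) ([]NodeRow, error)
	// Close releases connections and other resources.
	Close() error
}

// QueryConfig is sketched from the fields referenced above.
type QueryConfig struct {
	Table         string
	ValueMappings map[string]string // column -> templated value
	ExtraMappings map[string]string // column -> extra value
}

// NodeRow is an assumed shape for one discovered node.
type NodeRow struct {
	UID    string
	Values map[string]string
	Active bool
}
```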
2. Register in Factory (`internal/datasource/interface.go`):
```go
const SourceTypeYours SourceType = "yourdatasource"

func NewDatasource(sourceType SourceType, config Config) (Datasource, error) {
    switch sourceType {
    case SourceTypeYours:
        return NewYourAdapter(config)
    // ... other cases
    default:
        return nil, fmt.Errorf("unsupported source type: %q", sourceType)
    }
}
```
3. Add API Types (`api/v1/lynqhub_types.go`):
```go
const SourceTypeYours SourceType = "yourdatasource"

type LynqHubSourceSpec struct {
    // +kubebuilder:validation:Enum=mysql;postgresql;yourdatasource
    Type SourceType `json:"type"`

    YourDatasource *YourDatasourceSpec `json:"yourdatasource,omitempty"`
}
```
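The `YourDatasourceSpec` referenced above would carry connection settings for the new source. A sketch; every field name here is an illustrative assumption, not part of the actual API:
```go
package v1

// YourDatasourceSpec configures the hypothetical datasource.
type YourDatasourceSpec struct {
	// Host is the datasource endpoint.
	Host string `json:"host"`
	// Port to connect on.
	Port int32 `json:"port"`
	// CredentialsSecretRef names a Secret holding credentials.
	CredentialsSecretRef string `json:"credentialsSecretRef,omitempty"`
}
```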
4. Test:
```bash
make test
make lint
make build
```
Full Guide
📚 Detailed Step-by-Step Guide: Contributing a New Datasource
The full guide includes:
- Interface explanation with examples
- Complete MySQL reference implementation walkthrough
- PostgreSQL adapter example
- Testing strategies
- Documentation templates
- PR checklist
Key Files
| File | Purpose |
|---|---|
| `internal/datasource/interface.go` | Interface definition + factory |
| `internal/datasource/mysql.go` | Reference implementation |
| `internal/datasource/your_adapter.go` | Your implementation |
| `api/v1/lynqhub_types.go` | API types |
| `internal/controller/lynqhub_controller.go` | Controller integration |
Example: Study MySQL Adapter
The MySQL adapter (`internal/datasource/mysql.go`) is a complete, production-ready reference:
```bash
# View the implementation
cat internal/datasource/mysql.go

# Key sections:
# - NewMySQLAdapter(): Connection setup
# - QueryNodes(): Query + mapping + filtering
# - Close(): Resource cleanup
# - Helper functions: joinColumns(), isActive()
```
What to learn:
- Connection pooling configuration (see the sketch after this list)
- Query building with column mappings
- Result scanning and type handling
- Filtering logic (active nodes only)
- Error handling patterns
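On the connection-pooling point, an adapter typically configures the pool right after opening the `database/sql` handle. A minimal sketch; the driver import, DSN, and limits are placeholders rather than the adapter's actual values:
```go
package datasource

import (
	"database/sql"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the MySQL driver
)

// openPool opens a *sql.DB with conservative pool settings.
// The DSN and limits below are illustrative placeholders.
func openPool(dsn string) (*sql.DB, error) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(10)                  // cap concurrent connections
	db.SetMaxIdleConns(5)                   // keep a few warm connections
	db.SetConnMaxLifetime(30 * time.Minute) // recycle stale connections
	return db, nil
}
```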
Development Workflow
```bash
# 1. Create adapter file
touch internal/datasource/postgres.go

# 2. Implement interface
#    (Copy mysql.go as a template)

# 3. Register in factory
vim internal/datasource/interface.go

# 4. Add API types
vim api/v1/lynqhub_types.go

# 5. Generate manifests
make manifests

# 6. Write tests
touch internal/datasource/postgres_test.go

# 7. Test
make test

# 8. Lint
make lint

# 9. Build
make build

# 10. Test locally
make install
make run
kubectl apply -f config/samples/postgres/
```
Common Patterns
SQL-based datasources (MySQL, PostgreSQL) - see the sketch after this list:
- Use the `database/sql` package
- Build SELECT queries dynamically
- Use parameterized queries for safety
- Handle NULL values with `sql.NullString`
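Putting those four points together, a sketch of a parameterized query with NULL handling. It reuses the `QueryConfig` and `NodeRow` shapes sketched earlier; the column names are assumptions:
```go
package datasource

import (
	"context"
	"database/sql"
	"fmt"
)

// queryActive runs a parameterized SELECT and maps rows to NodeRow.
// Column names ("uid", "name", "active") are illustrative.
func queryActive(ctx context.Context, db *sql.DB, cfg QueryConfig) ([]NodeRow, error) {
	// Identifiers cannot be bound as parameters, so cfg.Table must come
	// from validated configuration; values always go through placeholders.
	q := fmt.Sprintf("SELECT uid, name FROM %s WHERE active = ?", cfg.Table)
	rows, err := db.QueryContext(ctx, q, true)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var out []NodeRow
	for rows.Next() {
		var uid string
		var name sql.NullString // the column may be NULL
		if err := rows.Scan(&uid, &name); err != nil {
			return nil, err
		}
		row := NodeRow{UID: uid, Active: true, Values: map[string]string{}}
		if name.Valid {
			row.Values["name"] = name.String
		}
		out = append(out, row)
	}
	return out, rows.Err()
}
```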
NoSQL datasources (MongoDB, DynamoDB) - see the sketch after this list:
- Use native client libraries
- Map documents/items to `NodeRow`
- Handle different query syntax
- Consider pagination for large datasets
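Document mapping is mostly field extraction. A driver-agnostic sketch; the document keys are assumptions, and a real adapter would take them from the hub's column mappings:
```go
package datasource

// docToNodeRow converts a decoded document into a NodeRow.
// The keys ("uid", "active") are illustrative placeholders.
func docToNodeRow(doc map[string]any) (NodeRow, bool) {
	uid, ok := doc["uid"].(string)
	if !ok || uid == "" {
		return NodeRow{}, false // skip documents without a usable UID
	}
	active, _ := doc["active"].(bool)

	row := NodeRow{UID: uid, Active: active, Values: map[string]string{}}
	// Copy all string fields into the value map.
	for k, v := range doc {
		if s, ok := v.(string); ok {
			row.Values[k] = s
		}
	}
	return row, true
}
```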
REST APIs - see the sketch after this list:
- Use the `net/http` client
- Unmarshal JSON to structs
- Map to `NodeRow`
- Handle authentication
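A stdlib-only sketch of the REST pattern; the endpoint path, auth scheme, and JSON shape are placeholders:
```go
package datasource

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// fetchNodes pulls node records from a hypothetical REST endpoint.
func fetchNodes(ctx context.Context, baseURL, token string) ([]NodeRow, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+"/nodes", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token) // placeholder auth scheme

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}

	// Assumed payload shape for illustration.
	var payload []struct {
		UID    string `json:"uid"`
		Active bool   `json:"active"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
		return nil, err
	}

	out := make([]NodeRow, 0, len(payload))
	for _, p := range payload {
		out = append(out, NodeRow{UID: p.UID, Active: p.Active})
	}
	return out, nil
}
```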
Tips
- Start with MySQL adapter - Copy it as a template
- Focus on QueryNodes() - This is the core logic
- Handle errors gracefully - Return clear error messages
- Filter consistently - Use the same `isActive()` logic
- Test thoroughly - Unit tests + integration tests
- Document well - Help users configure your datasource
Contributing
Contribution checklist
Always include tests, update docs, and run make lint before opening a pull request.
Workflow
- Fork repository
- Create feature branch
- Make changes
- Add tests
- Run linter: `make lint`
- Run tests: `make test`
- Commit with conventional commits
- Open Pull Request
Conventional Commits
```text
feat: add new feature
fix: fix bug
docs: update documentation
test: add tests
refactor: refactor code
chore: maintenance tasks
```
Pull Request Template
```markdown
## Description
Brief description of changes

## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing performed

## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests passing
```
Release Process
Release automation
Tags trigger the release pipeline. Confirm CI is green before pushing a new tag.
Version Bump
- Update version in:
  - `README.md`
  - `config/manager/kustomization.yaml`
- Generate changelog
- Create git tag:
```bash
git tag -a v1.1.0 -m "Release v1.1.0"
git push origin v1.1.0
```
- GitHub Actions builds and publishes the release
Useful Commands
```bash
# Install CRDs
make install

# Uninstall CRDs
make uninstall

# Deploy operator
make deploy IMG=<image>

# Undeploy operator
make undeploy

# Run locally
make run

# Build binary
make build

# Build container
make docker-build IMG=<image>

# Run tests
make test

# Run linter
make lint

# Generate code
make generate manifests
```