Developer Guide¶
Welcome to Olla development. This guide provides an overview of the architecture and key patterns. For detailed information, see the specific guides linked throughout.
Developed on Linux & macOS
Primary development has been done on Linux and macOS. You can develop on Windows, but you may hit UAC prompts for port usage and occasional pauses at startup.
Quick Start¶
# Clone and setup
git clone https://github.com/thushan/olla.git
cd olla
make deps
# Development workflow
make dev # Build with hot-reload
make test # Run tests
make ready # Pre-commit checks
See Development Setup for detailed environment configuration.
Architecture¶
Olla follows hexagonal architecture (ports & adapters) with three distinct layers:
internal/
├── core/ # Domain layer - business logic, zero external dependencies
├── adapter/ # Infrastructure - implementations of core interfaces
└── app/ # Application layer - HTTP handlers, orchestration
Key principle: Dependencies point inward. Core has no dependencies on outer layers.
See Architecture for component deep-dive.
Core Patterns¶
Service Management¶
Services use dependency injection with topological sorting:
type ManagedService interface {
    Name() string
    Dependencies() []string
    Start(ctx context.Context) error
    Stop(ctx context.Context) error
}
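As a rough sketch of what dependency-ordered startup can look like (illustrative only, not Olla's actual orchestration code; uses only context and fmt from the standard library), a depth-first walk over Dependencies() yields a topological start order:
// StartAll starts each service after its dependencies (depth-first
// topological order). Cycle detection is omitted for brevity.
func StartAll(ctx context.Context, services map[string]ManagedService) error {
    started := make(map[string]bool, len(services))

    var start func(name string) error
    start = func(name string) error {
        if started[name] {
            return nil
        }
        svc, ok := services[name]
        if !ok {
            return fmt.Errorf("unknown dependency: %s", name)
        }
        // Start dependencies before the service itself.
        for _, dep := range svc.Dependencies() {
            if err := start(dep); err != nil {
                return err
            }
        }
        if err := svc.Start(ctx); err != nil {
            return fmt.Errorf("start %s: %w", name, err)
        }
        started[name] = true
        return nil
    }

    for name := range services {
        if err := start(name); err != nil {
            return err
        }
    }
    return nil
}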
Concurrency¶
Heavy use of lock-free patterns for performance:
- Atomic operations for statistics and state
- xsync library for concurrent maps and counters
- Worker pools for controlled concurrency
// Example: Lock-free stats from stats/collector.go
type Collector struct {
    totalRequests *xsync.Counter
    endpoints     *xsync.Map[string, *endpointData]
}
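A hedged sketch of how such a collector can record a request without locks; endpointData's fields, NewCollector and RecordRequest below are illustrative, and the constructors assume a recent xsync release (NewCounter, NewMap) plus sync/atomic:
// endpointData holds per-endpoint counters; the real struct will differ.
type endpointData struct {
    requests atomic.Int64
}

func NewCollector() *Collector {
    return &Collector{
        totalRequests: xsync.NewCounter(),
        endpoints:     xsync.NewMap[string, *endpointData](),
    }
}

// RecordRequest bumps the global and per-endpoint counters lock-free.
func (c *Collector) RecordRequest(endpoint string) {
    c.totalRequests.Inc()
    data, _ := c.endpoints.LoadOrStore(endpoint, &endpointData{})
    data.requests.Add(1)
}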
Memory Optimisation¶
Object pooling reduces GC pressure.
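A minimal sketch of the idea using sync.Pool (the 32 KiB buffer size and helper name are illustrative, not Olla's actual pools; uses sync and io):
var bufferPool = sync.Pool{
    New: func() any {
        b := make([]byte, 32*1024) // illustrative size
        return &b
    },
}

func copyWithPooledBuffer(dst io.Writer, src io.Reader) (int64, error) {
    buf := bufferPool.Get().(*[]byte)
    defer bufferPool.Put(buf) // return the buffer for reuse instead of allocating per request
    return io.CopyBuffer(dst, src, *buf)
}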
See Technical Patterns for comprehensive pattern documentation.
Project Structure¶
.
├── cmd/ # Application entry points
├── internal/ # Private application code
│ ├── core/ # Business logic
│ ├── adapter/ # External integrations
│ └── app/ # Application layer
├── pkg/ # Public packages
├── config/ # Configuration files
├── test/ # Integration tests
└── docs/ # Documentation
Key Components¶
Proxy Engines¶
Two implementations with different trade-offs:
Engine | Description | Use Case
---|---|---
Sherpa | Simple, shared HTTP transport | Development, moderate load
Olla | Per-endpoint connection pools | Production, high throughput
See Proxy Engines for detailed comparison.
Load Balancing¶
Three strategies available:
- Priority: Routes to highest priority endpoint
- Round-Robin: Distributes requests evenly
- Least-Connections: Routes to least busy endpoint
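As an illustration of the round-robin strategy above (a sketch only, not the actual balancer; the Endpoint type is hypothetical and sync/atomic is assumed), an atomic counter keeps selection lock-free:
type Endpoint struct {
    URL string
}

type roundRobin struct {
    next atomic.Uint64
}

// Select returns endpoints in rotation; safe for concurrent callers.
func (r *roundRobin) Select(endpoints []*Endpoint) *Endpoint {
    if len(endpoints) == 0 {
        return nil
    }
    n := r.next.Add(1) - 1
    return endpoints[n%uint64(len(endpoints))]
}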
Health Checking¶
Automatic health monitoring with circuit breakers:
- Configurable check intervals
- Exponential backoff on failures
- Circuit breaker pattern (3 failures = open)
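A simplified sketch of the three-failure breaker described above (field names and the fixed cooldown are illustrative; the real implementation also applies exponential backoff as noted, and sync plus time are assumed):
type circuitBreaker struct {
    mu        sync.Mutex
    failures  int
    openUntil time.Time
}

// Allow reports whether requests may be sent to the endpoint.
func (cb *circuitBreaker) Allow() bool {
    cb.mu.Lock()
    defer cb.mu.Unlock()
    return time.Now().After(cb.openUntil)
}

// Record tracks consecutive failures and opens the circuit after three.
func (cb *circuitBreaker) Record(err error) {
    cb.mu.Lock()
    defer cb.mu.Unlock()
    if err == nil {
        cb.failures = 0
        return
    }
    cb.failures++
    if cb.failures >= 3 {
        cb.openUntil = time.Now().Add(30 * time.Second) // illustrative cooldown
        cb.failures = 0
    }
}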
Development Workflow¶
Code Style¶
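Run make ready before committing; it covers the pre-commit checks shown in the Quick Start above.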
Testing¶
See Testing Guide for testing patterns.
Common Tasks¶
Adding a New Endpoint Type¶
- Create profile in config/profiles/
- Implement converter in internal/adapter/converter/
- Add to profile registry
- Write tests
Modifying Proxy Behaviour¶
- Check both sherpa and olla implementations
- Update shared test suite in internal/adapter/proxy/
- Run benchmarks to verify performance
Adding Statistics¶
- Update stats.Collector with new metrics
- Use atomic operations or xsync
- Expose via status endpoints
Key Libraries¶
- puzpuzpuz/xsync: Lock-free data structures
- docker/go-units: Human-readable formatting
- Standard library: Extensive use of context, sync/atomic, net/http
Performance Considerations¶
- Pre-allocate slices: make([]T, 0, capacity)
- Use object pools: Reduce allocations in hot paths
- Atomic operations: Prefer over mutexes for counters
- Context propagation: Always pass context through call chain
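A small example of the pre-allocation tip above (the Model type is hypothetical):
// modelIDs builds the slice with capacity known up front, avoiding regrowth
// in a hot path.
func modelIDs(models []Model) []string {
    ids := make([]string, 0, len(models)) // len 0, cap len(models)
    for _, m := range models {
        ids = append(ids, m.ID)
    }
    return ids
}

type Model struct {
    ID string
}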
Common Pitfalls¶
Context Handling¶
// Bad - new context loses request metadata
ctx := context.Background()
// Good - propagate request context
func (s *Service) Process(ctx context.Context) error
Resource Cleanup¶
// Always close response bodies
resp, err := client.Do(req)
if err != nil {
    return err
}
defer resp.Body.Close()
Concurrent Map Access¶
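A minimal illustration of the pitfall (variable names are hypothetical; the safe variant mirrors the xsync usage in the stats collector and assumes a recent xsync release):
// Bad - plain maps are not safe for concurrent writes (data race, can fatal at runtime)
counts := make(map[string]int)
go func() { counts["a"]++ }()
go func() { counts["b"]++ }()

// Good - use a concurrent structure such as xsync.Map
safe := xsync.NewMap[string, int]()
safe.Store("a", 1)
safe.Range(func(key string, value int) bool {
    // iterate safely while other goroutines write
    return true
})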
Debugging¶
Request Tracing¶
Every request gets a unique ID for tracing.
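The sketch below is illustrative only, using crypto/rand, encoding/hex, context and net/http from the standard library; Olla's actual middleware, header name and ID format may differ:
type requestIDKey struct{}

func newRequestID() string {
    b := make([]byte, 8)
    _, _ = rand.Read(b) // crypto/rand; good enough for correlation IDs
    return hex.EncodeToString(b)
}

// withRequestID attaches an ID to the request context and echoes it back
// in the response so client reports and logs can be correlated.
func withRequestID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        id := r.Header.Get("X-Request-ID")
        if id == "" {
            id = newRequestID()
        }
        ctx := context.WithValue(r.Context(), requestIDKey{}, id)
        w.Header().Set("X-Request-ID", id)
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}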
Performance Profiling¶
# CPU profile
go tool pprof http://localhost:40114/debug/pprof/profile
# Memory profile
go tool pprof http://localhost:40114/debug/pprof/heap
Getting Help¶
- Check existing tests for examples
- Review Technical Patterns for detailed patterns
- See Contributing Guide for submission process
- Ask in GitHub Issues
Next Steps¶
- Development Setup - Configure your environment
- Architecture - System design and implementation
- Technical Patterns - Deep dive into patterns
- Circuit Breaker - Resilience patterns
- Testing Guide - Testing strategies
- Contributing - Contribution process
- Benchmarking - Performance testing