Accepted
2024-01-01 (formalized from existing design)
Lading has unusual constraints that many external crates don't satisfy:
- Performance: Must not introduce overhead that affects measurements
- Determinism: Must not introduce non-determinism (random seeds, timestamps)
- Control: Must allow fine-grained control over behavior
- Correctness: Must be provably correct for critical paths
External dependencies often don't meet these requirements, or meeting them requires extensive configuration that negates the benefit of using the dependency.
When in doubt, implement rather than import.
Before adding a dependency, consider:
- Does it meet performance requirements?
  - What's the overhead?
  - Does it allocate in hot paths?
  - What's the worst-case latency?
- Is it deterministic?
  - Does it use random number generation?
  - Does it depend on wall-clock time?
  - Does it have non-deterministic iteration order?
- Does it give sufficient control?
  - Can we configure it for our constraints?
  - Can we override problematic behaviors?
  - Can we test it adequately?
- Is the functionality core to lading's purpose?
  - For core functionality, we need maximum control
  - For peripheral functionality, dependencies are more acceptable
Dependencies used in more than one crate must be:
- Declared in the top-level `Cargo.toml` under `[workspace.dependencies]`
- Referenced from the workspace in sub-crates: `dependency = { workspace = true }`
This ensures:
- Consistent versions across crates
- Single location for version updates
- Visibility of shared dependencies
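Concretely, the pattern looks like this (crate names and versions are illustrative):

```toml
# Workspace root Cargo.toml
[workspace.dependencies]
tokio = "1.40"
serde = "1.0"

# Sub-crate Cargo.toml (e.g. lading_payload/Cargo.toml)
[dependencies]
tokio = { workspace = true }
serde = { workspace = true, features = ["derive"] }
```

Sub-crates may still enable crate-specific features, but the version is resolved in exactly one place.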
Crate versions use `XX.YY` format, not `XX.YY.ZZ`:

```toml
# Good
tokio = "1.40"

# Avoid
tokio = "1.40.0"
```

Implemented in-house:
- Throttling (`lading_throttle`) - Core functionality, must be provably correct
- Payload generation (`lading_payload`) - Core functionality, must be deterministic
- Signal handling (`lading_signal`) - Simple, no benefit from a dependency
External dependencies used:
- `tokio` - Async runtime; well-tested, would be massive to reimplement
- `serde` - Serialization; mature ecosystem, not in the hot path
- `proptest` - Testing only, not runtime code
- `criterion` - Benchmarking only, not runtime code
- Full control: Can optimize and modify as needed
- No surprise behaviors: We understand all code paths
- Determinism: Can ensure reproducibility
- Minimal overhead: Only include what we need
- Provability: Can apply Kani to our implementations
- Implementation cost: Must write and maintain more code
- Reinventing wheels: Some implementations exist elsewhere
- Bug surface: Our code may have bugs libraries don't
- Maintenance burden: Must track security issues ourselves
- Forces explicit consideration of each dependency
- Creates deeper understanding of problem domain
```
Is this core to lading's purpose?
├── Yes
│   └── Do we need formal verification?
│       ├── Yes → Implement (can use Kani)
│       └── No → Does any library meet ALL constraints?
│           ├── Yes → Use library
│           └── No → Implement
└── No
    └── Is there a well-maintained library?
        ├── Yes → Use library
        └── No → Implement minimal version
```
Use libraries for everything. Rejected because:
- Dependencies often don't meet our constraints
- Harder to ensure determinism
- Less control over performance characteristics
Implement everything from scratch. Rejected because:
- Impractical (would need to implement async runtime, etc.)
- Some dependencies are well-tested and stable
- Would delay development significantly
Fork dependencies and modify them. Used occasionally but:
- Maintenance burden of keeping forks updated
- May diverge significantly over time
- Only practical for small modifications
- Workspace `Cargo.toml` - Dependency declarations
- `lading_throttle/` - Example of in-house implementation
- `lading_payload/` - Example of in-house implementation
- ADR-003: Determinism Requirements (why we need control)
- ADR-005: Performance-First Design (why we need performance)