Our Approach
Every primitive we catalog — whether a data structure, an operational strategy, or a coordination pattern — goes through a rigorous evaluation before we claim it deserves a place alongside conventional approaches.
Three Tiers of Primitives
In biology, you cannot separate structure from metabolism from ecology. A cell that does not metabolize is dead. An organism that does not interact with its ecosystem is extinct. The same is true in computing.
Organics
Data Structures
Biological counterparts to classical data structures. Same problems, different topology — structures inspired by biology rather than mechanical metaphors. Nacre Array alongside Vec. Diatom Bitmap alongside Roaring bitmaps.
Metabolics
Operational Strategies
Strategies drawn from how organisms manage energy, resources, and lifecycle. A metabolic doesn’t compete with a data structure. It governs how computational resources are allocated, conserved, and recovered.
Ecologics
Coordination Patterns (Coming Soon)
Nature-inspired patterns for how multiple systems interact, compete, and co-evolve. Ecologics govern relationships between systems: symbiosis, succession, trophic cascades.
Organic Evaluation: 7 Phases
Every proposed organic must pass through all seven phases. Each phase ends with a gate question — a binary test that the organic must satisfy before advancing. If it fails a gate, we either fix the design or abandon it.
Nature Analog Identification
Identify the natural phenomenon and map its biological properties to computational properties.
Gate: Does the nature analog provide real computational insight, or is it just a naming exercise?
Formal Specification
Write a complete spec defining the data model, operations, complexity bounds, and invariants.
Gate: Is the spec precise enough to implement from?
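To make the gate concrete, here is a hedged sketch of what a spec's operational surface might look like, written as a Rust trait with complexity bounds and an invariant in the doc comments. `LayeredArray` and `VecBacked` are hypothetical names invented for this illustration — not the real NacreArray specification.

```rust
/// Sketch of a spec's operational surface. Complexity bounds and invariants
/// live in doc comments so they travel with the interface.
pub trait LayeredArray<T> {
    /// Append an element. Amortized O(1) in this sketch.
    fn push(&mut self, value: T);
    /// Remove and return the most recent element, or None if empty. O(1).
    fn pop(&mut self) -> Option<T>;
    /// Number of stored elements. O(1).
    /// Invariant: len() equals pushes minus successful pops.
    fn len(&self) -> usize;
}

/// Minimal Vec-backed reference implementation, the kind of baseline a
/// conformance test can check an organic against.
pub struct VecBacked<T> {
    items: Vec<T>,
}

impl<T> VecBacked<T> {
    pub fn new() -> Self {
        VecBacked { items: Vec::new() }
    }
}

impl<T> LayeredArray<T> for VecBacked<T> {
    fn push(&mut self, value: T) {
        self.items.push(value);
    }
    fn pop(&mut self) -> Option<T> {
        self.items.pop()
    }
    fn len(&self) -> usize {
        self.items.len()
    }
}
```

A spec at this level of precision answers the gate question: anyone can implement against the trait and test against the reference.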
Complexity Analysis
Analyze theoretical complexity for all operations and compare against the inorganic counterpart.
Gate: Does the organic outperform the inorganic in at least one operation class?
Implementation
Implement the organic in Rust with full test coverage for all operations and invariants.
Gate: Does the implementation compile, pass tests, and match the spec?
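The test discipline this phase demands can be sketched with a toy structure — `BoundedLog` is an invented example, not a Mutuus crate — whose invariant (length never exceeds capacity) is asserted on every mutation:

```rust
/// Toy bounded log: keeps at most `cap` entries, evicting the oldest.
/// The invariant under test: buf.len() <= cap at all times.
pub struct BoundedLog {
    buf: Vec<u32>,
    cap: usize,
}

impl BoundedLog {
    pub fn new(cap: usize) -> Self {
        BoundedLog { buf: Vec::new(), cap }
    }

    /// Append, evicting the oldest entry once capacity is reached.
    pub fn push(&mut self, v: u32) {
        if self.cap == 0 {
            return; // zero-capacity log stores nothing
        }
        if self.buf.len() == self.cap {
            self.buf.remove(0); // O(n) eviction; acceptable for a sketch
        }
        self.buf.push(v);
        debug_assert!(self.buf.len() <= self.cap); // the spec invariant
    }

    pub fn len(&self) -> usize {
        self.buf.len()
    }

    pub fn last(&self) -> Option<u32> {
        self.buf.last().copied()
    }
}
```

In the real workflow, every operation and invariant in the spec gets a test of this shape before the gate is considered passed.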
Benchmarking
Benchmark against the inorganic counterpart across multiple workload types and sizes.
Gate: Does the organic meet or exceed the inorganic on the workloads where it claims superiority?
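The real harness is Criterion, which adds warm-up, statistical analysis, and outlier detection. As a simplified stand-in, the head-to-head shape of the comparison looks like this — `mean_ns` and `compare` are illustrative helpers, not the project's benchmark code:

```rust
use std::time::Instant;

/// Mean wall-clock nanoseconds per call of `f` over `iters` runs.
/// A crude stand-in for Criterion's measurement loop.
fn mean_ns<F: FnMut()>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() as f64 / iters as f64
}

/// Run organic and inorganic under identical conditions and report both,
/// so the loser ships alongside the winner.
fn compare<A: FnMut(), B: FnMut()>(iters: u32, organic: A, inorganic: B) -> (f64, f64) {
    (mean_ns(iters, organic), mean_ns(iters, inorganic))
}
```

The key property the gate relies on: both sides run the same workload, the same number of times, on the same hardware, and both numbers are published.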
WASM + TypeScript Integration
Add WebAssembly bindings and create TypeScript wrappers so the organic runs in the browser.
Gate: Is the WASM overhead acceptable for the target use cases?
Publication
Update the site catalog, publish benchmark results, write papers and blog posts.
Gate: Is the documentation complete and accurate?
What We Measure
Classical analysis gives you O(n). We go further. Every primitive is evaluated across complexity dimensions that capture properties conventional approaches don’t even track.
O(e) Efficiency
Classical algorithmic complexity. Time and space bounds for every operation, compared directly against the inorganic counterpart.
O(a) Adaptiveness
How well the structure responds to changing workloads over time. Does it self-tune, or does performance degrade as access patterns shift?
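A minimal illustration of what O(a) measures — this `AdaptiveMap` is a generic textbook-style example invented for this page, not a Mutuus primitive: a map that starts as a cache-friendly linear-scan vector and promotes itself to a hash map once growth makes the scan stop paying off.

```rust
use std::collections::HashMap;

/// A map that self-tunes its representation as the workload grows.
pub struct AdaptiveMap {
    small: Vec<(u64, u64)>,
    big: Option<HashMap<u64, u64>>,
    threshold: usize,
}

impl AdaptiveMap {
    pub fn new(threshold: usize) -> Self {
        AdaptiveMap { small: Vec::new(), big: None, threshold }
    }

    pub fn insert(&mut self, k: u64, v: u64) {
        if let Some(map) = &mut self.big {
            map.insert(k, v);
            return;
        }
        if let Some(slot) = self.small.iter_mut().find(|(key, _)| *key == k) {
            slot.1 = v;
        } else {
            self.small.push((k, v));
        }
        // Self-tune: switch representation once the linear scan is too long.
        if self.small.len() > self.threshold {
            self.big = Some(self.small.drain(..).collect());
        }
    }

    pub fn get(&self, k: u64) -> Option<u64> {
        match &self.big {
            Some(map) => map.get(&k).copied(),
            None => self.small.iter().find(|(key, _)| *key == k).map(|(_, v)| *v),
        }
    }

    pub fn is_promoted(&self) -> bool {
        self.big.is_some()
    }
}
```

O(a) asks how gracefully and how cheaply that representation switch happens as access patterns shift — a property classical O(n) analysis never sees.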
O(r) Resilience
Graceful degradation under adversarial or edge-case inputs. When pushed beyond design parameters, does it fail catastrophically or degrade smoothly?
O(τ) Thermal Cost
The energy and memory overhead of the organic's self-management. Adaptation isn't free. We measure what it costs and whether the trade-off is worth it.
Metabolic Evaluation: 5 Phases
Metabolics aren’t data structures, so their evaluation criteria differ. Instead of benchmarking against a classical counterpart, metabolics are evaluated on their resource model, implementation feasibility, and real-world impact.
Dormancy/Energy Survey
Catalog biological strategies for the specific resource constraint. Require at least two convergent analogs from different phyla.
Gate: Do multiple independent lineages converge on this strategy?
Resource Model Extraction
For each analog, extract the trigger condition, depth spectrum, wake latency, maintenance cost, and recovery debt.
Gate: Is the resource model complete and measurable?
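The extracted model can be captured as a plain record. The field names below follow the phase description; the units and the `is_measurable` check are illustrative assumptions, not the project's actual schema:

```rust
/// One biological analog's resource model, one field per extracted quantity.
pub struct ResourceModel {
    /// Condition that triggers entry into the dormant state (idle seconds).
    pub trigger_idle_secs: u64,
    /// Depth spectrum: 0.0 = lightest sleep, 1.0 = deepest dormancy.
    pub depth: f64,
    /// Time to return to full service after a wake signal.
    pub wake_latency_ms: u64,
    /// Ongoing cost of staying dormant (keep-alive memory, MiB).
    pub maintenance_mib: u64,
    /// Extra work owed after waking (cache refill, state replay), in ms.
    pub recovery_debt_ms: u64,
}

impl ResourceModel {
    /// Gate check sketch: depth must be a finite value in [0, 1];
    /// the u64 fields are non-negative by construction.
    pub fn is_measurable(&self) -> bool {
        self.depth.is_finite() && (0.0..=1.0).contains(&self.depth)
    }
}
```

A model passes the gate only when every field is populated with a value that can actually be measured on a running system.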
Hardware/Infrastructure Filter
Determine whether the strategy can be implemented with standard OS primitives, container orchestration, or application-level state management.
Gate: Does it work without kernel modifications or custom hardware?
Policy Composition
Define the default policy, tuning surface, interaction with organics, and failure modes. Fewer knobs are better.
Gate: When the strategy fails, does the system degrade to always-awake (safe) or stuck-asleep (dangerous)?
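The safe direction of that degradation can be encoded directly. In this sketch, `Decision` and `PolicyError` are hypothetical names: any policy failure resolves to always-awake, never stuck-asleep.

```rust
/// What the dormancy policy decides on each evaluation.
#[derive(Debug, PartialEq)]
pub enum Decision {
    Sleep { depth: f64 },
    StayAwake,
}

/// Opaque stand-in for whatever can go wrong inside a policy.
pub struct PolicyError;

/// Wrap a policy's result so that failure degrades to the safe default:
/// an always-awake system wastes energy; a stuck-asleep system drops work.
pub fn fail_safe(policy_result: Result<Decision, PolicyError>) -> Decision {
    match policy_result {
        Ok(decision) => decision,
        Err(_) => Decision::StayAwake,
    }
}
```

Baking the fallback into the type signature means no caller can accidentally propagate a policy error into a sleep decision.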
System Validation
Map to real systems. Which juggernauts implement ad-hoc versions? What does the metabolic simplify? What are the energy savings?
Gate: Does the metabolic measurably improve resource efficiency in at least one production scenario?
Benchmark Philosophy
We publish everything. Wins, losses, regressions, dead ends. If a benchmark shows the organic is slower than its inorganic counterpart on a particular workload, that result ships alongside the ones where it wins. Research that hides failures is marketing, not science.
Our benchmark provenance operates at two levels. At the granular level, every benchmark run lives in the git history alongside the Criterion HTML reports — anyone can reproduce our results. At the narrative level, we capture key milestones, breakthroughs, and setbacks in analysis documents that inform our published content.
All benchmark code is open source. The results are real, the methodology is transparent, and the commit history tells the full story — including the parts where we got it wrong before we got it right.
AI-Assisted Research
Mutuus is a small team with big ideas. We use AI tools extensively, and we’re transparent about where they help and where they don’t.
AI assists with literature review, exploring biological analogs, formalizing intuitions into academic language, and drafting written content. When the team has a strong engineering intuition about a biological metaphor, AI helps translate that intuition into the vocabulary that the research community expects. It’s a communication tool, not a thinking tool.
What AI does not do: write Rust implementations, generate benchmark data, make gate decisions, or produce test results. Every crate compiles. Every test passes on real hardware. Every benchmark number comes from Criterion runs against actual code. The artifacts are human-built and machine-verified — by compilers and test suites, not by language models.
We believe the meaningful question isn’t “did you use AI?” but “can someone reproduce the thinking?” We publish our research, our formal specs, our interfaces, and our WASM bindings. Anyone can use our primitives — and anyone can read our reasoning and implement their own. A second brain on the same problem might yield something even better. That’s the accountability that matters.
What Comes Out
Each primitive produces a set of deliverables as it progresses through evaluation. No two primitives follow the same rollout pattern.
White Papers
Formal specifications with complexity proofs
Blog Posts
Accessible explanations of design and results
Benchmarks
Reproducible performance comparisons
WASM Bindings
Try primitives directly in the browser
Simulations
Interactive visualizations of primitive behavior
Case Studies
Real-world applications in production systems
Explore
See the approach in action. Read the published research, browse the primitives library, or follow development on the blog.