# v0.2.0 — The Nervous System
Universal Model Routing for LLMs, JEPAs, and Every Cognitive Architecture That Follows
Released: 18 April 2026
Author: Aadithya Vishnu Sajeev / Snath AI
Repository: snath-ai/Lar-JEPA
Prior Art Preprint: DMN v3.0 "The Dream Loop" (Zenodo, April 2026)
## What Changed
This release reframes and extends the Lár-JEPA repository from a JEPA-specific
orchestration showcase into its correct architectural identity: a universal
cognitive routing nervous system capable of routing any model type —
large language models (LLMs), JEPA world models, diffusion models, state-space
models (SSMs), graph neural networks (GNNs), and any architecture that follows
— as first-class, equally routable nodes within the same deterministic Lár
execution spine.
## New: `AbstractCognitiveNode` — The Universal Node Interface
File: `core/interfaces.py`
Introduced AbstractCognitiveNode as the root interface for all cognitive
components in the Lár routing graph. Any model type implements three methods:
```python
def encode(self, input_signal: Any) -> Any    # Signal → internal context
def forward(self, context: Any) -> Any        # Core inference pass
def decode(self, representation: Any) -> Any  # Output → GraphState signal
```

`AbstractManifold` (the JEPA-specific world model base) is now a subclass of
`AbstractCognitiveNode`, placing JEPAs and LLMs at the same level of the
type hierarchy. `AbstractEntropicRouter` is retained as the JEPA-specific
routing spine component.
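To make the three-method contract concrete, here is a minimal sketch. The abstract base mirrors the signatures listed above; the `UppercaseNode` implementation is purely illustrative (a hypothetical CLASSICAL node, not part of the repository), and the real definitions in `core/interfaces.py` may differ.

```python
from abc import ABC, abstractmethod
from typing import Any


class AbstractCognitiveNode(ABC):
    """Root interface: any model type becomes routable via these three methods."""

    @abstractmethod
    def encode(self, input_signal: Any) -> Any: ...   # Signal → internal context

    @abstractmethod
    def forward(self, context: Any) -> Any: ...       # Core inference pass

    @abstractmethod
    def decode(self, representation: Any) -> Any: ... # Output → GraphState signal


class UppercaseNode(AbstractCognitiveNode):
    """Hypothetical CLASSICAL node: a deterministic, non-neural text transform."""

    def encode(self, input_signal: Any) -> str:
        return str(input_signal)                      # normalize to text context

    def forward(self, context: str) -> str:
        return context.upper()                        # the "inference" pass

    def decode(self, representation: str) -> dict:
        return {"signal_type": "TEXT", "payload": representation}


node = UppercaseNode()
out = node.decode(node.forward(node.encode("hello lár")))
# out["payload"] == "HELLO LÁR"
```

Because an LLM wrapper, a JEPA manifold, and this trivial function all satisfy the same interface, the routing spine can treat them as interchangeable nodes.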
## New: `AbstractContextBridge` — Cross-Modal Signal Adapter
File: `core/interfaces.py`
Introduced AbstractContextBridge: a stateless signal conduit that adapts one
AbstractCognitiveNode's output SignalType to another node's expected
encode() input. This enables:
- LLMs attending to JEPA latent predictions — bridge converts
  `LATENT_EMBEDDING` → `TEXT` or prefix embedding for the LLM context window.
- JEPAs conditioning on LLM semantic embeddings — bridge converts
  `TEXT`/`GRAPH_STATE` → manifold-compatible context vector.
- Any future cross-modal composition — a new bridge declares
  `source_signal_type` and `target_signal_type`, and implements `bridge()`.
Bridges hold no model weights. They are pure format adapters — the synaptic
connectors of the nervous system.
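A hedged sketch of the bridge contract, assuming the `source_signal_type`/`target_signal_type`/`bridge()` shape described above. The `LatentToTextBridge` class and its rendering format are hypothetical illustrations, not the repository's implementation:

```python
from abc import ABC, abstractmethod
from typing import Any, List


class AbstractContextBridge(ABC):
    """Stateless signal conduit: adapts one node's output to another's input."""
    source_signal_type: str
    target_signal_type: str

    @abstractmethod
    def bridge(self, signal: Any) -> Any: ...


class LatentToTextBridge(AbstractContextBridge):
    """Hypothetical LATENT_EMBEDDING → TEXT adapter for an LLM context window."""
    source_signal_type = "LATENT_EMBEDDING"
    target_signal_type = "TEXT"

    def bridge(self, signal: List[float]) -> str:
        # No weights, no state: a pure format adapter.
        dims = ", ".join(f"{x:.3f}" for x in signal[:4])
        return f"[JEPA latent, first dims: {dims}]"


prefix = LatentToTextBridge().bridge([0.12, -0.5, 0.88, 0.01, 0.7])
```

Keeping bridges weight-free means any two nodes with declared signal types can be composed without retraining either one.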
## New: `ModelType`, `SignalType`, `CompositionPattern` Enums
File: `core/types.py`

### ModelType
All model types are first-class citizens in the Lár routing graph:
| Value | Description |
|---|---|
| `LLM` | Large Language Model (any provider via LiteLLM) |
| `JEPA` | Joint-Embedding Predictive Architecture |
| `DIFFUSION` | Diffusion / score-matching model |
| `SSM` | State Space Model (Mamba, S4, RWKV, etc.) |
| `GNN` | Graph Neural Network |
| `CLASSICAL` | Deterministic non-neural function |
| `HYBRID` | Heterogeneous composite |
| `FUTURE` | Any architecture not yet named or invented |
The `FUTURE` value is a formal architectural claim: any model type that does
not yet exist will implement `AbstractCognitiveNode` and become routable
without modifying the spine.
### SignalType
Typed signal vocabulary for cross-node communication:
`TEXT`, `LATENT_EMBEDDING`, `LGSL_SPEC`, `GRAPH_STATE`, `STRUCTURED_DATA`,
`TENSOR`, `IMAGE`, `AUDIO`, `GRAPH`, `DISTRIBUTION`.
### CompositionPattern
Eight named cross-modal composition design patterns:
`LLM_ROUTES_JEPA`, `JEPA_INFORMS_LLM`, `PARALLEL_HOMOGENEOUS`,
`PARALLEL_HETEROGENEOUS`, `HIERARCHICAL`, `CROSS_ATTENTION`,
`SEQUENTIAL_PIPELINE`, `RECURSIVE_SELF_IMPROVEMENT`.
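A compact sketch of the three enums using Python's functional `Enum` API. The member sets are taken directly from this release note; the actual definitions in `core/types.py` may use a different style or carry extra metadata:

```python
from enum import Enum

ModelType = Enum("ModelType",
    "LLM JEPA DIFFUSION SSM GNN CLASSICAL HYBRID FUTURE")

SignalType = Enum("SignalType",
    "TEXT LATENT_EMBEDDING LGSL_SPEC GRAPH_STATE STRUCTURED_DATA "
    "TENSOR IMAGE AUDIO GRAPH DISTRIBUTION")

CompositionPattern = Enum("CompositionPattern",
    "LLM_ROUTES_JEPA JEPA_INFORMS_LLM PARALLEL_HOMOGENEOUS "
    "PARALLEL_HETEROGENEOUS HIERARCHICAL CROSS_ATTENTION "
    "SEQUENTIAL_PIPELINE RECURSIVE_SELF_IMPROVEMENT")


# An architecture invented tomorrow routes today: it declares ModelType.FUTURE
# and implements AbstractCognitiveNode; the spine needs no changes.
def describe(mt: ModelType) -> str:
    return f"routable node of type {mt.name}"
```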
## New: `JEPA_DMN_Consolidation_Node` — Live JEPA→DMN Bridge
File: `dmn_integration/consolidation_node.py`
The consolidation stub has been replaced with a functional bridge connecting
the Lár-JEPA execution layer to the Default Mode Network (DMN) episodic memory
store (Hippocampus / ChromaDB).
### `write_trajectory_heuristic(trajectory_log, embedding)`
- Called after `AbstractEntropicRouter` returns `COMMIT_TRAJECTORY`
- Converts the committed JEPA trajectory to a text summary + metadata record
- Writes to the ChromaDB `long_term_memory` collection with `memory_type: episodic`
  (DMN v3.0 memory typing for the Consolidation Loop)
- Also writes to the DMN JSON journal for narrative history

### `recall_heuristics(query, max_results)`
- Retrieves semantically relevant past JEPA heuristics to warm the current
  planning cycle
- Calls `Hippocampus.recall()` via ChromaDB vector similarity search
Both operations degrade gracefully if the DMN Hippocampus is unavailable.
The JEPA execution spine is never blocked by DMN availability.
This completes the first pass of the Consolidation Loop:

```
JEPA prediction
  → EntropicRouter: COMMIT_TRAJECTORY
  → JEPA_DMN_Consolidation_Node.write_trajectory_heuristic()
  → Hippocampus.save_memory()
  → ChromaDB long_term_memory + dreams.json
  → (idle cycle) Dreamer consolidates → semantic memory
  → (future) BrainNode LoRA trains on confirmed routing decisions
```
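The graceful-degradation contract can be sketched as follows. This is an illustrative skeleton only: the `Hippocampus` dependency, the record fields, and the return convention are assumptions, and the real logic lives in `dmn_integration/consolidation_node.py`:

```python
import logging
from typing import Any, Dict, List, Optional

log = logging.getLogger("lar.consolidation")


class JEPA_DMN_Consolidation_Node:
    """Sketch: write committed JEPA trajectories to DMN memory, never block."""

    def __init__(self, hippocampus: Optional[Any] = None):
        self.hippocampus = hippocampus  # DMN episodic store (ChromaDB-backed)

    def write_trajectory_heuristic(
        self, trajectory_log: List[Dict], embedding: List[float]
    ) -> bool:
        # Convert the committed trajectory to a text summary + metadata record.
        summary = " → ".join(step.get("action", "?") for step in trajectory_log)
        record = {"summary": summary, "memory_type": "episodic"}

        if self.hippocampus is None:
            log.warning("DMN Hippocampus unavailable; skipping consolidation.")
            return False  # degrade gracefully: the JEPA spine continues

        try:
            self.hippocampus.save_memory(record, embedding)
            return True
        except Exception as exc:
            log.warning("Consolidation failed (%s); spine not blocked.", exc)
            return False


# With no Hippocampus attached, the write is skipped but nothing raises:
node = JEPA_DMN_Consolidation_Node(hippocampus=None)
ok = node.write_trajectory_heuristic(
    [{"action": "plan"}, {"action": "commit"}], [0.1, 0.2]
)
```

The key design point is the boolean return: consolidation success is advisory, so DMN outages can never stall the execution spine.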
## New: ARCHITECTURE.md — The Nervous System Design Document
File: `ARCHITECTURE.md` (top-level)
Formal specification of the Lár-JEPA framework as a universal cognitive
routing nervous system. Documents:
- The three-layer architecture (Routing Spine / Signal Bridge / Cognitive Nodes)
- All `ModelType` values with example architectures and output signal types
- Six named composition patterns with ASCII dataflow diagrams
- `BatchNode` concurrency for homogeneous and heterogeneous model ensembles
- Forward compatibility guarantee via `ModelType.FUTURE`
- The complete prior art chain (5 DOIs)
## Updated: core/__init__.py
All new types and interfaces are now exported from the core package:
```python
from core import (
    AbstractCognitiveNode, AbstractManifold,
    AbstractContextBridge, AbstractEntropicRouter,
    RouteDecision, ModelType, SignalType,
    CompositionPattern, StructuralImpasseError,
)
```

## Updated: README.md — Reframed
| Before | After |
|---|---|
| Title: "Orchestrating World Models (A Post-LLM Architecture)" | "The Universal Model Routing Nervous System" |
| Section: "The Post-LLM Paradigm (JEPA)" | "The Universal Routing Problem" |
| Framing: JEPA replaces LLMs | Framing: Lár routes LLMs AND JEPAs and everything after |
| Footer: "Conceptual Testbed and Showcase" | Accurate description of implemented components |
## Prior Art Status
The architectural mechanisms introduced in this release are documented as
prospective prior art in:
Sajeev, A. V. (2026). DMN v3.0 — The Dream Loop: Memory Consolidation,
Familiarity-Weighted Retrieval, and the Learned Graph Executor in the Lár
Default Mode Network. Zenodo. (DOI pending publication 18 April 2026.)
The AbstractCognitiveNode universal routing interface, AbstractContextBridge
cross-modal composition layer, ModelType.FUTURE forward-compatibility
placeholder, and the six named CompositionPattern values were all designed
and committed by Aadithya Vishnu Sajeev under Snath AI prior to any employment,
corporate collaboration, or externally funded research. This release constitutes
the software implementation record complementing the above preprint.
## Summary
The industry is building the Brain (LLMs, JEPAs).
We are building the Nervous System (Lár + DMN).
v0.2.0 is the release where that sentence became architecturally precise.