Architecture
Overview
The system is implemented in two layers:
- `src/decision_engine/` - Intelligent-first core primitives (semantic encoding, learned importance, learned decay, learned ranking, storage manager).
- `src/memory_engine/` - v1-compatible stage modules and orchestrator API:
  - Stage 1: input processing (`stage1_input`)
  - Stage 2: decision logic (`stage2_decision`)
  - Stage 3: learning loop (`stage3_learning`)
  - Storage and retrieval (`storage`)
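The split between the two layers can be sketched as follows. This is a hypothetical illustration of the layering only: the `SemanticEncoder` class, its toy encoding, and the constructor shape are assumptions, not the project's real API; only `DecisionEngine.process_input` and the stage-1 responsibility come from the docs.

```python
class SemanticEncoder:
    """Stands in for a decision_engine core primitive (semantic encoding).

    The real project uses learned/provider-backed encoders; this toy version
    just maps text to a one-element numeric vector so the sketch is runnable.
    """

    def encode(self, text: str) -> list[float]:
        # Deterministic placeholder embedding: mean character code.
        return [sum(ord(c) for c in text) / max(len(text), 1)]


class DecisionEngine:
    """Stands in for the memory_engine orchestrator facade.

    It owns no intelligence of its own; it validates inputs and delegates
    to the decision_engine primitives it was constructed with.
    """

    def __init__(self, encoder: SemanticEncoder):
        self.encoder = encoder  # primitive injected from the lower layer

    def process_input(self, event: dict) -> dict:
        # Stage 1: validate the event schema, then semantically encode content.
        if "content" not in event:
            raise ValueError("event must include 'content'")
        event["embedding"] = self.encoder.encode(event["content"])
        return event


engine = DecisionEngine(SemanticEncoder())
processed = engine.process_input({"content": "meeting notes"})
```

The point of the sketch is the dependency direction: the orchestrator layer composes lower-layer primitives rather than reimplementing them.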
Data Flow
- Event ingestion:
  - `memory_engine.engine.DecisionEngine.process_input` validates the event schema and semantically encodes content.
- Decisioning:
  - `make_storage_decision` uses the learned importance score blended with a cold-start prior.
  - Learned decay rate and half-life metadata are attached to the decision.
- Storage:
  - `store_memory` persists to SQLite and the vector index abstraction.
  - The compression planner collapses repetitive clusters into a compressed memory record.
- Retrieval:
  - Vector preselection via the optional FAISS backend or a numpy fallback.
  - A learned ranker reorders candidates.
- Learning:
  - Outcome feedback continuously updates the importance model, ranker, and decay learner.
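Two steps in this flow benefit from a concrete sketch: the cold-start prior blend in decisioning and the numpy fallback for vector preselection. Everything below is an assumption-laden illustration: the blend weight schedule, the decay formula, the 0.5 store threshold, and the function shapes are invented for this sketch, not the project's actual implementation.

```python
import numpy as np


def make_storage_decision(learned_score: float, prior_score: float,
                          n_feedback: int, warmup: int = 100) -> dict:
    # Cold-start blend (hypothetical schedule): the deterministic prior
    # dominates until enough outcome feedback has accumulated, then the
    # learned importance score takes over. The blend stays bounded in [0, 1].
    alpha = min(n_feedback / warmup, 1.0)
    importance = alpha * learned_score + (1 - alpha) * prior_score

    # Illustrative decay: more important memories decay more slowly.
    # Half-life is derived from the rate and attached as metadata.
    decay_rate = 0.1 * (1.0 - importance)
    half_life = np.log(2) / max(decay_rate, 1e-9)

    return {"store": importance > 0.5, "importance": importance,
            "decay_rate": decay_rate, "half_life": half_life}


def preselect(query_vec: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    # numpy fallback for vector preselection: cosine-similarity top-k over a
    # dense (n_memories, dim) matrix. A learned ranker would reorder these.
    sims = index @ query_vec / (
        np.linalg.norm(index, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(-sims)[:k]


# 25 feedback events in: alpha = 0.25, so the prior still carries most weight.
decision = make_storage_decision(learned_score=0.9, prior_score=0.4, n_feedback=25)

index = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
top = preselect(np.array([1.0, 0.1]), index, k=2)
```

The design intent the sketch mirrors: the formula prior is a bounded bootstrap signal that fades as the learned model earns trust, and the numpy path keeps retrieval working when FAISS is absent.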
Key Design Decisions
- Learned-first behavior with controlled cold start:
  - Decisions are driven by trainable models.
  - The deterministic formula prior is retained as a bounded bootstrap signal.
- Provider abstraction for semantics and embeddings:
  - Deterministic providers enable repeatable local tests.
  - OpenAI-backed providers are available for production semantics.
- Compression as an explicit storage optimization:
  - Repetitive memory clusters are compacted.
  - Compression metadata is tracked (`is_compressed`, `original_count`).
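The compression decision can be sketched minimally. The `is_compressed` and `original_count` field names come from the docs above; the clustering rule here (exact content match via a counter) is a deliberately simplified stand-in for the project's actual compression planner.

```python
from collections import Counter


def compress_cluster(memories: list[dict]) -> list[dict]:
    """Collapse repetitive memory records into compressed records.

    Toy rule: records with identical content form a cluster. Clusters of
    size > 1 are replaced by a single record carrying compression metadata.
    """
    counts = Counter(m["content"] for m in memories)
    compressed = []
    for content, n in counts.items():
        compressed.append({
            "content": content,
            "is_compressed": n > 1,   # metadata tracked per the docs
            "original_count": n,      # how many records were collapsed
        })
    return compressed


records = [{"content": "daily standup"}] * 3 + [{"content": "design review"}]
out = compress_cluster(records)
```

Here four records compact to two, and the compressed record retains enough metadata to report how many originals it replaced.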