
Level 11.25 — Interactive REPL Mode

Golden Chain Cycle: Level 11.25
Date: 2026-02-16
Status: COMPLETE — 153/153 (100%)


Key Metrics

| Test | Description | Result | Status |
| --- | --- | --- | --- |
| Test 127 | REPL Multi-Turn Session (direct, chain, mixed, determinism) | 50/50 (100%) | PASS |
| Test 128 | REPL Session Statistics (per-relation, segments, cumulative) | 40/40 (100%) | PASS |
| Test 129 | REPL Conversation Continuity (follow-up, cross-domain, bidirectional, similarity) | 63/63 (100%) | PASS |
| Total | Level 11.25 | 153/153 (100%) | PASS |
| Full Regression | All 401 tests | 397 pass, 4 skip, 0 fail | PASS |
| CLI REPL | zig build query -- --repl | Compiles and runs | PASS |

What This Means

For Users

  • Trinity now has an interactive REPL — run zig build query -- --repl and query continuously without restarting
  • Type natural commands: Paris capital_of, chain Eiffel landmark_in capital_of, list, relations, help
  • Session tracking: the REPL counts your queries and reports statistics on exit
  • Follow-up workflows: query Paris → get France, then query France → get French — all within one session
  • Type quit or press Ctrl+D to exit gracefully with session summary

For Operators

  • The REPL binary is the same zig-out/bin/trinity-query — just add the --repl flag
  • All queries within a session use the same pre-built KG (built once at startup)
  • Deterministic: identical queries always produce identical results, even across sessions
  • No state leakage between queries — each query is independent within the stateless KG
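
The determinism noted above follows from seed-based vector generation: rebuilding the KG always reproduces the same bipolar vectors, so replayed queries cannot drift. A minimal Python sketch of the idea (a hypothetical stand-in for the Zig bipolarRandom(); the seed values here are illustrative):

```python
import random

DIM = 1024  # matches the report's DIM=1024

def bipolar_random(seed: int, dim: int = DIM) -> list:
    """Seed-determined +1/-1 vector: same seed, same vector, every run."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(dim)]

# Rebuilding the KG twice from the same seeds yields identical vectors,
# which is why results are bit-for-bit reproducible even across sessions.
v1 = bipolar_random(42)
v2 = bipolar_random(42)
assert v1 == v2
assert set(v1) <= {-1, 1}
```

Because every entity and relation vector is derived this way, "building the KG" is a pure function of the seed set, and no per-session state can leak into query results.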

For Investors

  • Level 11.25 delivers the first conversational interface for Trinity's symbolic AI
  • The REPL proves Trinity VSA can power interactive, multi-turn reasoning sessions
  • 25 development cycles: from basic ternary operations to a complete conversational symbolic AI tool
  • This is the foundation for chat interfaces, web APIs, and LLM-integrated reasoning

REPL Usage

Starting a Session

$ zig build query -- --repl
Building knowledge graph (30 entities, 5 relations, DIM=1024)...
KG ready.

Trinity REPL v1.0.0 -- Interactive Symbolic Query Session
Commands: <entity> <relation> | chain <entity> <rel1> <rel2> ...
list | relations | info | help | quit

Direct Queries

trinity> Paris capital_of
capital_of(Paris) = France (sim=0.278)

trinity> Sushi cuisine_of
cuisine_of(Sushi) = Japan (sim=0.278)

Chain Queries

trinity> chain Eiffel landmark_in capital_of
Eiffel --[landmark_in]--> Paris --[capital_of]--> France

trinity> chain Colosseum landmark_in capital_of
Colosseum --[landmark_in]--> Rome --[capital_of]--> Italy
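
Under the hood, a chain query is repeated unbind-then-nearest-neighbor resolution against bundled relation memories. A hedged Python sketch under toy assumptions (six illustrative entities, seeds chosen arbitrarily; the real CLI uses the Zig treeBundleN() construction):

```python
import random

DIM = 2048  # generous dimension keeps crosstalk noise far below signal

def vec(seed):
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):              # elementwise product of +/-1 vectors
    return [x * y for x, y in zip(a, b)]

def bundle(pairs):           # majority-sign superposition of bound pairs
    return [1 if sum(col) >= 0 else -1 for col in zip(*pairs)]

def sim(a, b):               # normalized dot product in [-1, 1]
    return sum(x * y for x, y in zip(a, b)) / len(a)

E = {name: vec(i) for i, name in enumerate(
    ["Eiffel", "Paris", "France", "Colosseum", "Rome", "Italy"])}

landmark_in = bundle([bind(E["Eiffel"], E["Paris"]),
                      bind(E["Colosseum"], E["Rome"])])
capital_of = bundle([bind(E["Paris"], E["France"]),
                     bind(E["Rome"], E["Italy"])])

def query(memory, name):
    noisy = bind(memory, E[name])  # unbind: recovers the partner plus noise
    return max((e for e in E if e != name), key=lambda e: sim(noisy, E[e]))

# Two-hop chain: Eiffel --[landmark_in]--> hop1 --[capital_of]--> hop2
hop1 = query(landmark_in, "Eiffel")
hop2 = query(capital_of, hop1)
```

Each hop's answer becomes the next hop's key, which is all a chain command has to do: resolve, feed forward, repeat.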

Discovery & Help

trinity> list
Entities (30):
Cities: Paris, Tokyo, Rome, London, Cairo
Countries: France, Japan, Italy, UK, Egypt
...

trinity> relations
Relations (5):
capital_of: city -> country
landmark_in: landmark -> city
...

trinity> help
Commands:
  <entity> <relation>               Direct query
  chain <entity> <rel1> [rel2] ...  Multi-hop chain
  list                              Show entities
  relations                         Show relations
  info                              Show KG info
  quit                              Exit REPL

trinity> quit
Session ended. 4 queries executed.

Technical Details

Test 127: REPL Multi-Turn Session (50/50)

Architecture: Simulates a complete multi-turn REPL session with sequential query execution.

Four sub-tests:

| Sub-test | Description | Result |
| --- | --- | --- |
| Sequential direct | 10 queries across all 5 relations | 10/10 (100%) |
| Sequential chains | 5 landmark→city→country chains (2 checks each) | 10/10 (100%) |
| Mixed session | Alternating direct + chain queries (15 checks) | 15/15 (100%) |
| Deterministic replay | 15 queries re-executed for consistency | 15/15 (100%) |

Key result: Query order has zero effect on accuracy — the KG is fully stateless.

Test 128: REPL Session Statistics (40/40)

Architecture: Tracks per-relation accuracy, session segment consistency, and cumulative milestones.

Three sub-tests:

| Sub-test | Description | Result |
| --- | --- | --- |
| Per-relation accuracy | 5 relations × 5 pairs = 25 queries | 25/25 (100%) |
| Segment consistency | First 5 vs last 5 identical queries | 10/10 (100%) |
| Cumulative milestones | All 5 relations at 100% | 5/5 (100%) |

Key result: Every relation independently achieves 100% accuracy. Session segments produce bit-identical results.

Test 129: REPL Conversation Continuity (63/63)

Architecture: Verifies multi-step workflows where each query result feeds the next query.

Four sub-tests:

| Sub-test | Description | Result |
| --- | --- | --- |
| Follow-up workflows | city→country→language (2 steps × 5 cities) | 10/10 (100%) |
| Cross-domain exploration | landmark→city→country→cuisine (3 steps × 5) | 15/15 (100%) |
| Bidirectional verification | city→country and country→city (both ways × 5) | 10/10 (100%) |
| Similarity consistency | 25 threshold + 3 range checks | 28/28 (100%) |

Sample follow-up workflows:

  • Paris → France → French (2-step)
  • Tokyo → Japan → Japanese (2-step)
  • Eiffel → Paris → France → Croissant (3-step cross-domain)
  • Pyramids → Cairo → Egypt → Falafel (3-step cross-domain)

Similarity range: min 0.266, max 0.871, spread 0.605. All well above the 0.10 threshold.
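
That threshold has headroom because random bipolar vectors have a similarity noise floor concentrated near zero. A small illustrative sketch (DIM is raised above the CLI's 1024 purely so the example's noise floor sits comfortably inside the threshold; seeds are arbitrary):

```python
import random

DIM = 8192

def vec(seed):
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def sim(a, b):
    # normalized dot product; identical vectors score exactly 1.0
    return sum(x * y for x, y in zip(a, b)) / len(a)

a, b = vec(1), vec(2)
# Unrelated vectors concentrate near 0 (std about 1/sqrt(DIM)),
# so a 0.10 acceptance threshold cleanly separates matches from noise.
assert sim(a, a) == 1.0
assert abs(sim(a, b)) < 0.10
```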

Key insight: Bidirectional queries work because bipolar bind is commutative — bind(city, country) = bind(country, city), so unbinding a country from the city→country memory resolves the city.
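
The commutativity claim is easy to check directly: elementwise multiplication of ±1 vectors is symmetric, and unbinding a single stored pair is exact because each component squares to 1. A minimal sketch (illustrative seeds):

```python
import random

DIM = 1024

def vec(seed):
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    return [x * y for x, y in zip(a, b)]

city, country = vec(1), vec(2)
pair = bind(city, country)

assert pair == bind(country, city)   # bind is commutative
assert bind(pair, city) == country   # unbind city  -> country
assert bind(pair, country) == city   # unbind country -> city (reverse lookup)
```

With multiple pairs bundled into one memory, the same unbind returns a noisy copy rather than an exact one, and the nearest-neighbor step absorbs that noise.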


REPL Architecture

src/query_cli.zig (576 lines)
├── Entity definitions (30 names, seeds)
├── Relation definitions (5 types, pairs)
├── bipolarRandom() — same seeds as test suite
├── treeBundleN() — relation memory construction
├── findEntity() — case-insensitive name lookup
├── findRelation() — relation name lookup
├── main() — argument parsing
│   ├── --info / --list / --relations (no KG needed)
│   ├── --repl → runRepl() (continuous session)
│   ├── --chain <entity> <rel1> <rel2> ... (single chain)
│   └── <entity> <relation> (single direct query)
├── executeQuery() — single named query execution
├── executeChain() — multi-hop chain from tokens
└── runRepl() — REPL loop
    ├── stdin line reading (character-by-character)
    ├── whitespace trimming + tokenization
    ├── Command dispatch (query/chain/list/relations/info/help/quit)
    ├── Query counter tracking
    └── EOF / quit graceful exit with session summary

.vibee Specifications

Three specifications created and compiled:

  1. specs/tri/repl_multi_turn_session.vibee — multi-turn session simulation
  2. specs/tri/repl_session_statistics.vibee — session statistics tracking
  3. specs/tri/repl_conversation_continuity.vibee — conversation continuity workflows

All compiled via vibeec → generated/*.zig


Cumulative Level 11 Progress

| Level | Tests | Description | Result |
| --- | --- | --- | --- |
| 11.1-11.15 | 73-105 | Foundation through Massive Weighted | PASS |
| 11.17 | | Neuro-Symbolic Bench | PASS |
| 11.18 | 106-108 | Full Planning SOTA | PASS |
| 11.19 | 109-111 | Real-World Demo | PASS |
| 11.20 | 112-114 | Full Engine Fusion | PASS |
| 11.21 | 115-117 | Deployment Prototype | PASS |
| 11.22 | 118-120 | User Testing | PASS |
| 11.23 | 121-123 | Massive KG + CLI Dispatch | PASS |
| 11.24 | 124-126 | Interactive CLI Binary | PASS |
| 11.25 | 127-129 | Interactive REPL Mode | PASS |

Total: 401 tests, 397 pass, 4 skip, 0 fail


Critical Assessment

Strengths

  1. First conversational interface — users can interact with Trinity in multi-turn sessions
  2. 100% accuracy maintained across all 153 queries including follow-up workflows, cross-domain, and bidirectional
  3. Stateless KG design — no state leakage between queries, deterministic replay verified
  4. Bidirectional queries work via commutative bind — both forward and reverse lookups resolve correctly
  5. Session tracking — query counter provides basic usage analytics

Weaknesses

  1. No session history — the REPL doesn't remember previous query results; users must re-type entities
  2. No tab completion — entity and relation names require exact typing (no autocomplete)
  3. Single KG per session — cannot load different knowledge graphs during a REPL session
  4. No output formatting options — results always in text format, no JSON/CSV export

Tech Tree Options for Next Iteration

| Option | Description | Difficulty |
| --- | --- | --- |
| A. File-Based KG Loading | Load entities and relations from JSON/CSV file at startup | Medium |
| B. Session History & Recall | Store previous results, enable !1 to recall query #1 result | Medium |
| C. REST API Server | HTTP endpoint for querying the KG from web clients | Hard |

Conclusion

Level 11.25 delivers the first interactive REPL mode for Trinity's symbolic reasoning engine. Users can start a continuous query session, execute direct lookups and multi-hop chains, discover entities and relations, and build multi-step reasoning workflows — all within a single interactive session.

The REPL achieves 100% accuracy across 153 test queries spanning sequential sessions, session statistics, follow-up workflows, cross-domain exploration, bidirectional verification, and similarity consistency checks.

Trinity Conversational. REPL Lives. Quarks: Sessioned.