Why We Gave Our AI Agent Orbital Memory Instead of a Vector Database

Osamah Alghanmi · Co-Founder & Technical Lead · 6 min read

Everyone's building RAG systems with vector DBs. We gave our AI a structured memory system that actually understands context.

[Diagram: an orbital unit linking an Entity (Matter), Pages (Space), and Traits (Energy), with idle/active states and has_trait, renders, and transition relationships]
Orbital Unit = Entity + Traits + Pages

The RAG Problem

Retrieval-Augmented Generation (RAG) is the standard approach for giving AI agents memory:

  1. Take user query: "How did I handle auth last time?"
  2. Generate embedding vector
  3. Search vector database for similar vectors
  4. Inject top-K results into prompt
  5. Generate response

The problem? Vector similarity ≠ contextual relevance.
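For reference, that loop looks roughly like the sketch below. It is a minimal illustration, not any specific library: embed, vectorStore, and llm are hypothetical stand-ins for whatever embedding model, vector database, and model client you use.

// Minimal sketch of the standard RAG loop described above.
// embed, vectorStore, and llm are hypothetical stand-ins, not a specific library.
interface Chunk {
  text: string;
  score: number;
}

declare function embed(text: string): Promise<number[]>;            // step 2
declare const vectorStore: {
  search(queryVector: number[], topK: number): Promise<Chunk[]>;    // steps 3-4
};
declare const llm: { complete(prompt: string): Promise<string> };   // step 5

async function answerWithRag(query: string): Promise<string> {
  const queryVector = await embed(query);
  const chunks = await vectorStore.search(queryVector, 5);
  const context = chunks.map((c) => c.text).join('\n');
  return llm.complete(`Context:\n${context}\n\nQuestion: ${query}`);
}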

When RAG Fails

Scenario 1: Temporal Context

  • User: "What did I work on last Tuesday?"
  • Vector DB: Finds documents about "work" and "Tuesday meetings"
  • Reality: User wants their specific session from 5 days ago

Scenario 2: Pattern Matching

  • User: "Show me all the list views I've built"
  • Vector DB: Finds documents containing "list" and "view"
  • Reality: User wants entity-table patterns used across sessions

Scenario 3: Causal Reasoning

  • User: "Why did my auth implementation fail?"
  • Vector DB: Finds documents about auth
  • Reality: User needs the error → fix → success chain

Vector search finds similar text. It doesn't understand what you're actually asking.

The Orbital Memory Alternative

Instead of vector embeddings, Almadar's AI uses structured orbital memory:

// Memory is structured as orbital schemas
interface MemoryOrbital {
  userPreferences: {
    namingConvention: 'PascalCase' | 'camelCase';
    preferredPatterns: string[];
    commonEntities: string[];
    validationStyle: 'strict' | 'minimal';
  };

  generationSessions: {
    threadId: string;
    prompt: string;
    skill: string;
    patterns: string[];
    entities: string[];
    success: boolean;
    createdAt: Date;
  }[];

  projectContext: {
    appId: string;
    existingEntities: string[];
    conventions: string[];
    domain: string;
  };
}

This is memory as structured data with relationships, not text chunks with vectors.

Why Orbitals Make Better Memory

1. Temporal State Transitions

Instead of just timestamps, we capture the journey:

{
  "sessionId": "sess_123",
  "prompt": "Create Order entity",
  "timeline": [
    { "state": "generated", "timestamp": "2025-03-01T10:00:00Z" },
    { "state": "validation_failed", "timestamp": "2025-03-01T10:02:00Z", "errors": ["Missing INIT"] },
    { "state": "fixed", "timestamp": "2025-03-01T10:05:00Z" },
    { "state": "compiled", "timestamp": "2025-03-01T10:06:00Z" }
  ]
}

Now the AI can answer questions like these (the first is sketched in code after the list):

  • "What errors did I fix last week?" → Find validation_failed → fixed transitions
  • "What's my success rate?" → Count generated → compiled paths
  • "Which patterns cause errors?" → Correlate patterns with failure states

2. Structured Querying

Find sessions by actual fields:

// Find all successful sessions using entity-table pattern
const sessions = await memoryManager.getUserGenerationHistory(userId, {
  filter: {
    success: true,
    patterns: { $contains: 'entity-table' }
  }
});

// Find project context
const context = await memoryManager.getProjectContext(appId);
// Returns: { existingEntities: ['Order', 'User'], conventions: [...] }

No embeddings. No similarity thresholds. Just precise queries.

3. Composable Memory

Memory orbitals compose just like regular orbitals:

{
  "name": "UserMemory",
  "orbitals": [
    { "name": "PreferenceMemory", "entity": "UserPreference" },
    { "name": "GenerationMemory", "entity": "GenerationSession" },
    { "name": "ProjectMemory", "entity": "ProjectContext" }
  ],
  "listens": [
    { "event": "SESSION_COMPLETED", "triggers": "UPDATE_MEMORY" }
  ]
}

Memory updates are events that trigger state transitions — just like regular Almadar apps.
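Conceptually, the SESSION_COMPLETED listener above boils down to an event handler that writes the finished session into memory. Here is a sketch assuming a simple event bus; the events object and event shape are illustrative, while recordGeneration mirrors the memory manager call shown in the next section.

// Illustrative wiring for SESSION_COMPLETED -> UPDATE_MEMORY.
// The event bus is an assumption; recordGeneration mirrors the call shown below.
interface SessionCompletedEvent {
  userId: string;
  threadId: string;
  prompt: string;
  skill: string;
  patterns: string[];
  entities: string[];
  success: boolean;
}

declare const events: {
  on(name: 'SESSION_COMPLETED', handler: (e: SessionCompletedEvent) => Promise<void>): void;
};
declare const memoryManager: {
  recordGeneration(userId: string, session: Omit<SessionCompletedEvent, 'userId'>): Promise<void>;
};

events.on('SESSION_COMPLETED', async (event) => {
  // UPDATE_MEMORY: persist the completed session as structured memory
  const { userId, ...session } = event;
  await memoryManager.recordGeneration(userId, session);
});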

How It Works

Recording a Session

// Internal: Almadar's memory system records generation sessions
const memoryManager = createMemoryManager(db);

// After completing a generation session
await memoryManager.recordGeneration(userId, {
  threadId: 'thread_123',
  prompt: 'Create Order entity with validation',
  skill: 'kflow-orbitals',
  patterns: ['entity-form', 'validation-rules'],
  entities: ['Order', 'OrderItem'],
  success: true,
});

// Update user preferences based on patterns used
await memoryManager.updateUserPreferences(userId, {
  preferredPatterns: ['entity-form', 'validation-rules'],
  commonEntities: ['Order', 'OrderItem'],
});

// Update project context
await memoryManager.updateProjectContext(appId, {
  existingEntities: ['Order', 'OrderItem'],
  conventions: ['use-entity-table-for-lists'],
});

Using Memory in Generation

// Internal: When the agent starts, memory is loaded automatically
const agent = createAgent({
  skill: 'kflow-orbitals',
  userId: 'user_123',
  appId: 'app_456',
});

// Memory context is automatically injected into the system prompt

The AI receives:

## User Context

### User Preferences
- Preferred naming: PascalCase
- Preferred patterns: entity-form, validation-rules
- Commonly used entities: Order, OrderItem

### Project Context
- Project: E-Commerce Platform
- Existing entities: Order, OrderItem
- Project conventions: use-entity-table-for-lists

No retrieval needed — the relevant context is already there.
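Rendering that block is just string formatting over the structured memory. A minimal sketch follows; the function name and exact wording are illustrative, and the shapes mirror MemoryOrbital above.

// Illustrative: render structured memory into the context block shown above.
interface UserPreferences {
  namingConvention: string;
  preferredPatterns: string[];
  commonEntities: string[];
}

interface ProjectContext {
  domain: string;
  existingEntities: string[];
  conventions: string[];
}

function renderUserContext(prefs: UserPreferences, project: ProjectContext): string {
  return [
    '## User Context',
    '',
    '### User Preferences',
    `- Preferred naming: ${prefs.namingConvention}`,
    `- Preferred patterns: ${prefs.preferredPatterns.join(', ')}`,
    `- Commonly used entities: ${prefs.commonEntities.join(', ')}`,
    '',
    '### Project Context',
    `- Project: ${project.domain}`,
    `- Existing entities: ${project.existingEntities.join(', ')}`,
    `- Project conventions: ${project.conventions.join(', ')}`,
  ].join('\n');
}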

Real-World Analogy: Medical Records vs Search Engine

Vector DB approach = Google Search:

  • Search: "heart problems"
  • Get: Millions of results about hearts
  • Problem: Too generic, not YOUR heart

Orbital Memory approach = Your Medical Record:

  • Query: Patient ID 12345
  • Get: Complete history, medications, allergies, previous diagnoses
  • Advantage: Structured, accurate, personal

When your doctor treats you, they don't Google "heart symptoms." They look at your structured record.

Comparison: RAG vs Orbital Memory

| Aspect | RAG (Vector DB) | Orbital Memory |
|---|---|---|
| Storage | Text chunks + embeddings | Structured orbital schemas |
| Query | Similarity search | Field-based queries |
| Temporal | Timestamps only | State transitions |
| Relationships | None explicit | Entity relations, foreign keys |
| Reasoning | Surface similarity | Deep semantic + causal |
| Updates | Re-embed documents | State machine transitions |
| Explainability | "Similarity score: 0.87" | "Matched field: patterns" |

The Memory Orbital Schema

Here's the actual memory structure:

{
  "name": "AgentMemory",
  "version": "1.0.0",
  "orbitals": [
    {
      "name": "UserPreferenceMemory",
      "entity": {
        "name": "UserPreference",
        "fields": [
          { "name": "userId", "type": "string", "required": true },
          { "name": "namingConvention", "type": "enum", "values": ["PascalCase", "camelCase", "snake_case"] },
          { "name": "preferredPatterns", "type": "array", "items": { "type": "string" } },
          { "name": "commonEntities", "type": "array", "items": { "type": "string" } },
          { "name": "confidence", "type": "number", "default": 0.5 }
        ]
      }
    },
    {
      "name": "GenerationHistoryMemory",
      "entity": {
        "name": "GenerationSession",
        "fields": [
          { "name": "threadId", "type": "string", "required": true },
          { "name": "prompt", "type": "string", "required": true },
          { "name": "skill", "type": "string", "required": true },
          { "name": "patterns", "type": "array", "items": { "type": "string" } },
          { "name": "entities", "type": "array", "items": { "type": "string" } },
          { "name": "success", "type": "boolean" },
          { "name": "createdAt", "type": "timestamp" }
        ]
      }
    }
  ]
}

Memory IS an Almadar schema. It uses the same patterns as any other app.

Try It: Build a Memory-Aware Agent

// Internal: Almadar's AI agent uses structured memory
// (This is how it works under the hood — not a public API)

const memoryManager = createMemoryManager(db);

// The agent is created with memory access
const agent = createAgent({
  skill: 'kflow-orbitals',
  workDir: '/workspace',
  userId: 'user_123',
  appId: 'app_456',
});

// The agent now has access to:
// - User's preferred patterns
// - Previously generated entities
// - Project conventions
// - Past successful/failed approaches

const result = await agent.run({
  input: 'Create a Product entity',
});

// After completion, session data is synced to memory automatically
// Recording: what was generated, which patterns were used, success/failure

The Takeaway

Vector databases are great for finding similar text. But AI agents need structured understanding:

  • What patterns does this user prefer?
  • What entities exist in this project?
  • What worked before and what failed?
  • How did we fix previous errors?

Orbital memory provides that structure. It's not a database — it's a knowledge representation that matches how Almadar thinks about applications.

Because the best memory system isn't one that finds similar words. It's one that understands context.

Next up: Agentic Search: Teaching an AI to Remember Like a Human.
