A cognitive agent orchestration platform built on the principle that evolution already solved intelligence — we just need to read the blueprints.
Overview
cornOS is a native macOS cognitive agent platform that deploys AI assistants which genuinely learn and improve over time. Unlike conventional chatbots that reset every conversation, cornOS agents develop persistent memory, consolidate knowledge through bio-mimetic sleep cycles, and coordinate as specialised teams.
Each agent is a fully autonomous entity with its own personality, memory, expertise, communication channels, and tool access. Agents can delegate tasks to specialists, operate across email, messenger, and webhooks, and run scheduled workflows — all while building institutional knowledge that compounds over time.
The platform is LLM-agnostic, supporting Claude, OpenAI-compatible models, and local inference. It's built in Swift as a native macOS menu bar application — lightweight, always-on, and deeply integrated with the operating system.
Menu bar application built in Swift. Always-on, lightweight, with Keychain credential storage and AppleScript automation.
Each agent can use a different provider — Claude API, OpenAI-compatible endpoints, or local models. Switch without changing agent logic.
Every agent is individually tuneable — system prompts, tool budgets, memory limits, trust policies, dream parameters, channel bindings, and more.
Architecture
cornOS
├── Message Ingestion
│   ├── AIMessagePoller      // Messenger API (60s interval)
│   ├── ChannelPoller        // IMAP, webhooks (30-120s)
│   └── WebhookServer        // Inbound HTTP triggers
│
├── Agent Pool
│   ├── AIAgent              // Per-agent config (UUID-based)
│   ├── AIAgentManager       // Lifecycle, CRUD, persistence
│   └── AgentStateManager    // Processing state, delegation chains
│
├── Cognitive Core
│   ├── ClaudeInvoker        // Prompt assembly, provider routing
│   ├── AgentMemoryManager   // Per-user + global memory
│   ├── PlanParser           // [STEP:tool:params] extraction
│   └── ToolRouter           // 16 executor adapters
│
├── Subconscious
│   ├── MelatoninTracker     // Activity accumulation
│   ├── DreamEngine          // Sleep cycle orchestration
│   └── SubconsciousEngine   // Dream execution & memory ops
│
├── Knowledge
│   ├── RAGManager           // Document intelligence
│   ├── SkillLibraryManager  // Reusable workflows
│   └── ConversationStore    // Message history
│
└── Governance
    ├── ApprovalManager      // Authorization gates
    ├── TrustPolicyEngine    // 5-tier trust model
    └── ScheduledTaskManager // Cron-based autonomy
1. Message arrives via Messenger poll, IMAP fetch, or webhook. Sender authenticated against trust policies.
2. Last 20 messages fetched. Per-user + global memories injected. RAG documents surfaced if relevant.
3. LLM generates a response with optional plan steps. Plans parsed into sequential tool calls.
4. ToolRouter dispatches each step to the correct executor. Approval gates checked. Results accumulated.
5. Final reply sent back via the originating channel. Memory updated. Melatonin incremented.
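The stages above can be sketched as a single processing function. This is an illustrative sketch only: the type and function names (`InboundMessage`, `trustLevel`, `invokeLLM`, and so on) are hypothetical stand-ins, not the actual cornOS APIs.

```swift
// Hypothetical sketch of the message pipeline; all names are
// illustrative, not actual cornOS types.
struct InboundMessage { let sender: String; let body: String }

// Stubbed collaborators so the sketch is self-contained.
func trustLevel(for sender: String) -> Int { sender == "stranger" ? 0 : 3 }
func recentHistory(limit: Int) -> [String] { ["previous turn"] }
func injectedMemories(for sender: String) -> [String] { ["prefers brevity"] }
func invokeLLM(context: [String], message: String) -> String {
    "reply to: \(message)"
}

func process(_ msg: InboundMessage) -> String? {
    guard trustLevel(for: msg.sender) > 0 else { return nil } // 1. authenticate sender
    var context = recentHistory(limit: 20)                    // 2a. last 20 messages
    context += injectedMemories(for: msg.sender)              // 2b. memories + RAG
    let reply = invokeLLM(context: context, message: msg.body) // 3. generate response
    // 4. plan steps would be parsed and routed to executors here
    return reply                                              // 5. send via channel
}
```

A blocked sender short-circuits at step 1 and the message is simply ignored, matching the trust model described later.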
The Dream Engine
Just as the human brain cycles through sleep stages to consolidate learning, cornOS agents accumulate cognitive pressure and enter dream states that transform raw experience into distilled knowledge.
Activity generates digital melatonin. When it crosses a threshold — and the agent has been idle for 30+ minutes with a 2-hour cooldown since the last dream — consolidation begins.
Melatonin caps at 100.0. Rates are configurable per agent — allowing research into how different "metabolic profiles" affect learning outcomes.
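In code, the trigger conditions reduce to a small value type. The cap, the 30-minute idle requirement, and the 2-hour cooldown mirror the figures above; the threshold value of 70.0 and all names are assumptions for illustration.

```swift
import Foundation

// Sketch of the dream-trigger logic; the threshold of 70.0 is an
// assumed value, not a documented cornOS default.
struct MelatoninTracker {
    var level: Double = 0
    let cap: Double = 100.0                      // melatonin caps at 100.0
    let threshold: Double = 70.0                 // assumed trigger level
    let idleRequirement: TimeInterval = 30 * 60  // 30+ minutes idle
    let cooldown: TimeInterval = 2 * 60 * 60     // 2 hours since last dream

    mutating func recordActivity(_ amount: Double) {
        level = min(cap, level + amount)         // accumulate, clamped at cap
    }

    func shouldDream(idleFor idle: TimeInterval,
                     sinceLastDream: TimeInterval) -> Bool {
        level >= threshold && idle >= idleRequirement && sinceLastDream >= cooldown
    }
}
```

Making the rates and threshold per-agent properties is what allows the different "metabolic profiles" mentioned above.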
Quick housekeeping pass. Prunes 3-5 stale or redundant memories, notes 2-3 emerging patterns as global memories.
Thorough consolidation. Merges duplicate entries across users, compresses verbose memories into concise form, strengthens important patterns. Targets 10-20 memory operations.
Creative cross-user analysis. Discovers patterns spanning multiple users, generates 3-5 insight globals prefixed with "pattern_" or "insight_". Finds behavioural trends, shared interests, recurring themes.
Directed task execution. The agent is given a custom directive to process during the dream cycle — enabling targeted reflection, strategic planning, or creative exploration.
During dreams, the LLM generates structured memory operations that are automatically applied:
[MEMORY:SAVE_USER:userId:key:value]
[MEMORY:SAVE_GLOBAL:key:value]
[MEMORY:FORGET_USER:userId:key]
[MEMORY:FORGET_GLOBAL:key]
[DREAM:SUMMARY:one sentence summary of the dream]
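A minimal parser for these tags might look like the following. The tag grammar comes from the examples above; the `MemoryOp` enum shape is an assumption, and values containing extra colons are rejoined naively.

```swift
// Minimal parser for dream-time memory operations. The tag grammar is
// from the doc; the MemoryOp enum is an illustrative assumption.
enum MemoryOp: Equatable {
    case saveUser(user: String, key: String, value: String)
    case saveGlobal(key: String, value: String)
    case forgetUser(user: String, key: String)
    case forgetGlobal(key: String)
}

func parseMemoryOp(_ line: String) -> MemoryOp? {
    guard line.hasPrefix("[MEMORY:"), line.hasSuffix("]") else { return nil }
    let inner = line.dropFirst("[MEMORY:".count).dropLast()
    let parts = inner.split(separator: ":", omittingEmptySubsequences: false)
                     .map(String.init)
    guard let op = parts.first else { return nil }
    switch op {
    case "SAVE_USER" where parts.count >= 4:
        // Rejoin trailing parts so values may contain ":".
        return .saveUser(user: parts[1], key: parts[2],
                         value: parts[3...].joined(separator: ":"))
    case "SAVE_GLOBAL" where parts.count >= 3:
        return .saveGlobal(key: parts[1],
                           value: parts[2...].joined(separator: ":"))
    case "FORGET_USER" where parts.count == 3:
        return .forgetUser(user: parts[1], key: parts[2])
    case "FORGET_GLOBAL" where parts.count == 2:
        return .forgetGlobal(key: parts[1])
    default:
        return nil
    }
}
```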
Every dream is recorded with metrics — memories created, modified, pruned, and melatonin consumed — enabling longitudinal analysis of how agents' knowledge evolves.
Omnichannel Communication
cornOS agents maintain a unified identity across all communication channels, with trust-aware permissions that scale with the relationship.
Email — full IMAP inbound + SMTP/SendGrid/Mailgun outbound. MIME parsing, attachment handling, thread awareness. Configurable polling intervals (min 30s).
Web chat — real-time chat via DCCD service. Full conversation history, 60s polling, complete tool access for authenticated users.
Webhooks — inbound HTTP triggers for system integration. Trust-level gating, async response delivery, tool parameter validation.
WhatsApp — Meta Cloud API integration. Phone-based trust, rich media support, business messaging compliance.
SMS — Twilio integration for quick-response interactions. Lightweight tool subset for mobile-first use cases.
Telegram — Bot API integration. Group and direct messaging, inline commands, rich formatting.
Inspired by how social species develop trust through repeated interaction — from stranger to ally.
Tier 0 — No access. Message ignored.
Tier 1 — Conversation only. No tool access.
Tier 2 — Read-only document access via RAG.
Tier 3 — Full tool access enabled.
Tier 4 — All tools + administrative privileges.
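The five tiers map naturally onto an ordered enum. The capabilities come from the descriptions above; the case names themselves are illustrative, not cornOS's actual labels.

```swift
// The five trust tiers as an ordered enum; capabilities are from the
// doc, the case names are illustrative assumptions.
enum TrustTier: Int, Comparable {
    case blocked        // no access: message ignored
    case conversational // conversation only, no tools
    case reader         // read-only document access via RAG
    case trusted        // full tool access
    case admin          // all tools + administrative privileges

    static func < (lhs: TrustTier, rhs: TrustTier) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

// Permissions scale monotonically with the relationship.
func canReadDocuments(_ tier: TrustTier) -> Bool { tier >= .reader }
func canUseTools(_ tier: TrustTier) -> Bool { tier >= .trusted }
```

Because the tiers are `Comparable`, every permission check is a single ordering comparison rather than a case-by-case table.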
Tool Ecosystem
Every tool is routed through a protocol-based executor system. Built-in tools cover the most common operations; the Model Context Protocol (MCP) opens the door to anything else.
Intelligence Layer
Every agent maintains two memory stores. User-specific memories capture per-person preferences, context, and history. Global memories hold cross-user patterns, institutional knowledge, and dream-generated insights. Both are injected into every conversation — configurable counts, default 5 each.
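The dual-store injection described above can be sketched as follows. The default of 5 per store comes from the doc; the recency-first ranking and the `Memory` shape are assumptions for illustration.

```swift
import Foundation

// Sketch of dual-store memory injection. Default of 5 per store is
// documented; recency ranking and the Memory struct are assumptions.
struct Memory { let key: String; let value: String; let updatedAt: Date }

func injectedMemories(userMemories: [Memory],
                      globalMemories: [Memory],
                      perStoreLimit: Int = 5) -> [Memory] {
    let newestFirst: (Memory, Memory) -> Bool = { $0.updatedAt > $1.updatedAt }
    // Take the top N from each store independently, then concatenate,
    // so user-specific and global knowledge are both always represented.
    return Array(userMemories.sorted(by: newestFirst).prefix(perStoreLimit))
         + Array(globalMemories.sorted(by: newestFirst).prefix(perStoreLimit))
}
```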
Per-agent document collections with three search modes: local TF-IDF (no API required), OpenAI embeddings (1536-dim semantic search), or local embeddings via Ollama. Supports PDF, DOCX, HTML, TXT, and images. Automatic chunking with page-number tracking.
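The local, no-API search mode can be illustrated with a toy TF-IDF scorer. Real chunking, stemming, and scoring details in cornOS are not documented here, so everything below is a simplified sketch.

```swift
import Foundation

// Toy TF-IDF scorer in the spirit of the local (no-API) search mode.
// No stemming or chunking; a simplified illustration only.
func tokenize(_ text: String) -> [String] {
    text.lowercased().split { !$0.isLetter && !$0.isNumber }.map(String.init)
}

func tfidfScores(query: String, documents: [String]) -> [Double] {
    let docTokens = documents.map(tokenize)
    let n = Double(documents.count)
    return docTokens.map { tokens in
        let counts = Dictionary(grouping: tokens, by: { $0 })
            .mapValues { Double($0.count) }
        return tokenize(query).reduce(0.0) { score, term in
            let tf = (counts[term] ?? 0) / Double(max(tokens.count, 1))
            let df = Double(docTokens.filter { $0.contains(term) }.count)
            let idf = log((n + 1) / (df + 1)) + 1   // smoothed IDF
            return score + tf * idf
        }
    }
}
```

The appeal of this mode is that it runs entirely on-device; the embedding modes trade that self-sufficiency for semantic matching.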
Reusable workflow recipes with names, summaries, and detailed instructions. Skills can include plan templates for deterministic execution. A global library is shared across agents; each agent can also define custom skills. Invoked via the use_skill tool.
Multi-agent teams with role-based access. Shared file storage with traversal prevention, shared memory pools, and desk instructions (SOPs injected into prompts). Workspace-level MCP servers grant role-gated tool access.
Condition-based approval gates: match by tool name, prefix, parameter thresholds, new contacts, or unconditionally. Configurable urgency (blocking/standard/informational), timeout actions (auto-reject/approve/skip), and multi-target notifications.
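Condition-based matching of this kind can be sketched with a small rule enum. The matching conditions mirror the options listed above; the struct and case names are assumptions, and urgency/timeout handling is omitted.

```swift
// Sketch of condition-based approval gates; rule kinds mirror the
// options above, the types themselves are illustrative assumptions.
struct ToolCall { let name: String; let parameters: [String: String] }

enum ApprovalRule {
    case toolName(String)                        // exact tool name
    case toolPrefix(String)                      // e.g. every "send_" tool
    case parameterEquals(key: String, value: String)
    case always                                  // unconditional gate

    func matches(_ call: ToolCall) -> Bool {
        switch self {
        case .toolName(let n):   return call.name == n
        case .toolPrefix(let p): return call.name.hasPrefix(p)
        case .parameterEquals(let k, let v): return call.parameters[k] == v
        case .always:            return true
        }
    }
}

// A call pauses for human approval if any configured rule matches.
func requiresApproval(_ call: ToolCall, rules: [ApprovalRule]) -> Bool {
    rules.contains { $0.matches(call) }
}
```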
Per-agent, per-category structured logging — dreams, inbound/outbound channels, errors, usage. Token counting and cost tracking per provider. Dream records with memory operation metrics. Status bar indicators for processing state and delegation chains.
Execution Model
cornOS agents don't just generate text — they generate executable plans.
When an agent needs to take action, it produces structured plans that are parsed and executed sequentially. Each step is a tool call with typed parameters, routed through the approval system before execution.
[PLAN:START]
[STEP:1:TOOL:web_search:query=latest developments in bio-mimetic AI]
[STEP:2:TOOL:rag_search:query=our previous research on neural consolidation]
[STEP:3:TOOL:memory_recall:key=user_research_interests]
[STEP:4:TOOL:send_email:to=researcher@lab.org|subject=Summary|body=...]
[STEP:5:TOOL:memory_save:key=last_research_brief|value=bio-mimetic AI update sent]
[PLAN:END]
Steps run in order. Each step's output is available to subsequent steps. Failed steps can halt or skip based on configuration.
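A plan in the format shown above could be parsed along these lines. The `[STEP:n:TOOL:name:key=value|key=value]` grammar is taken from the example; error handling is deliberately simplified and the `PlanStep` type is an assumption.

```swift
import Foundation

// Minimal parser for the plan DSL shown above; grammar from the
// example, error handling simplified, PlanStep is an assumed type.
struct PlanStep: Equatable {
    let index: Int
    let tool: String
    let parameters: [String: String]
}

func parsePlan(_ text: String) -> [PlanStep] {
    text.split(separator: "\n").compactMap { line in
        let trimmed = line.trimmingCharacters(in: .whitespaces)
        guard trimmed.hasPrefix("[STEP:"), trimmed.hasSuffix("]") else { return nil }
        // "[STEP:" is 6 characters; maxSplits keeps ":" legal inside values.
        let parts = trimmed.dropFirst(6).dropLast()
            .split(separator: ":", maxSplits: 3)
        guard parts.count == 4, parts[1] == "TOOL",
              let index = Int(parts[0]) else { return nil }
        var params: [String: String] = [:]
        for pair in parts[3].split(separator: "|") {
            let kv = pair.split(separator: "=", maxSplits: 1)
            if kv.count == 2 { params[String(kv[0])] = String(kv[1]) }
        }
        return PlanStep(index: index, tool: String(parts[2]), parameters: params)
    }
}
```

Lines that are not steps, such as `[PLAN:START]` and `[PLAN:END]`, are simply skipped, so the parser tolerates surrounding prose from the LLM.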
Configurable max tool calls per message (default 5). Prevents runaway execution while allowing complex multi-step workflows.
Sensitive operations pause for human approval. Rules match on tool name, parameter values, or contact novelty.
Technology
Primary language. Type-safe, performant, native.
Modern UI framework with native macOS integration.
Networking layer with custom timeout management.
Low-level socket connections for IMAP/SMTP.
Secure credential storage for all API keys and tokens.
Document text extraction and image OCR.
Model Context Protocol (MCP) for unlimited tool extensibility.
JSON persistence for agents, memories, dreams, and configs.
Whether you're interested in the platform for research, business automation, or just want to geek out about bio-mimetic AI — let's talk.