Getting Started

Engram is a memory service for AI agents. It stores complete, uncompressed conversation transcripts and makes them searchable via semantic search. Connect any MCP-compatible client and your agent remembers everything.

Why Engram?

Every existing memory product (Mem0, Zep, Supermemory) compresses conversations into extracted “memories.” Details get lost. Context disappears.

Engram takes a different approach: the conversation IS the knowledge base. Every message, tool call, and response is stored verbatim. When you search, you get back the actual conversation — not a summary of it.

Quick Start

1. Install

Homebrew (recommended):

brew install get-engram/engram/engram

npm:

npm install -g @getengram/cli

2. Create an Account

engram signup

That’s it. No email, no password, no website. You get an API key instantly and you’re ready to go.

Already have an account? Sign in instead:

engram login

3. Link an Email (Optional)

If you want to access the web dashboard or upgrade to a paid plan, attach an email to your account:

engram link

This lets you sign in at getengram.app/login with your email and password.

4. Start Auto-Capture

The Engram daemon runs in the background and automatically captures every AI conversation on your machine. No configuration needed — it watches Claude Code’s transcript files and syncs everything.

Homebrew:

brew services start engram

npm:

engram start --install

That’s it. Every Claude Code session is now captured automatically. The daemon starts on login, restarts if it crashes, and queues messages offline.

5. Search Your Memory

engram search "when did we deploy"

Or use the MCP server from any AI client:

search query: "OAuth login 403 error"

Returns matching conversation chunks with relevance scores and the original messages.
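To give a feel for working with those results, here is a minimal Python sketch. The field names (`score`, `title`, `snippet`) are assumptions for illustration, not Engram's documented response schema:

```python
# Hypothetical search results: field names are illustrative assumptions,
# not Engram's actual response schema.
results = [
    {"score": 0.91, "title": "Deploy to production", "snippet": "deployed v2.3 on Friday"},
    {"score": 0.74, "title": "CI pipeline fixes", "snippet": "deploy step failed until cache cleared"},
]

def render(results, min_score=0.5):
    """Drop results below a relevance threshold and list the rest, best first."""
    kept = [r for r in results if r["score"] >= min_score]
    kept.sort(key=lambda r: -r["score"])
    return [f"{r['score']:.2f}  {r['title']}: {r['snippet']}" for r in kept]

for line in render(results):
    print(line)
```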

6. View Recent Activity

engram log

Shows your recent AI sessions — project name, branch, timestamp, and message count.


What Gets Captured

Claude Code writes every message, tool call, and response to JSONL transcript files at ~/.claude/projects/. When Claude Code hits its context window limit, it compresses older messages to make room — but the full uncompressed conversation stays on disk.

The Engram daemon watches these files and syncs the complete record to your account. Nothing is lost, even after Claude Code has forgotten it.
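The core of that loop can be sketched in a few lines of Python. This assumes each transcript line is a JSON object with a `message` field carrying `role` and `content` — a simplification for illustration; the real on-disk schema may differ:

```python
import io
import json

def parse_transcript(lines):
    """Yield (role, content) pairs from JSONL transcript lines.

    Skips blank lines and lines that are not yet fully written,
    which a watcher will see while the file is being appended to.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # partial line: the writer has not finished it yet
        msg = record.get("message", {})
        if "role" in msg:
            yield msg["role"], msg.get("content", "")

sample = io.StringIO(
    '{"type": "user", "message": {"role": "user", "content": "fix the 403"}}\n'
    '{"type": "assistant", "message": {"role": "assistant", "content": "done"}}\n'
)
print(list(parse_transcript(sample)))  # [('user', 'fix the 403'), ('assistant', 'done')]
```

A real watcher would remember its byte offset per file and only parse newly appended lines on each wake-up.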

| What | Captured? |
| --- | --- |
| User messages | Yes |
| Assistant responses | Yes |
| Tool calls and results | Yes |
| Thinking blocks | Skipped (not useful for search) |
| File contents read by tools | Yes |
| Messages after context compression | Yes — from the JSONL, not the AI’s memory |

Connect the MCP Server (Optional)

For AI clients that support MCP, you can also connect Engram as a server. This lets the AI search and store memories directly.

Claude Code — add to ~/.claude/settings.json:

{
  "mcpServers": {
    "engram": {
      "type": "url",
      "url": "https://mcp.getengram.app/mcp",
      "headers": {
        "Authorization": "Bearer engram_sk_live_your_api_key_here"
      }
    }
  }
}

Claude Desktop — add to claude_desktop_config.json:

{
  "mcpServers": {
    "engram": {
      "url": "https://mcp.getengram.app/mcp",
      "headers": {
        "Authorization": "Bearer engram_sk_live_your_api_key_here"
      }
    }
  }
}

See Integrations for all supported clients.


The real power of Engram is when your agent stores and recalls memory automatically — without you having to tell it to.

How it works

  1. On session start — The agent searches Engram for prior context relevant to your first message
  2. During the session — Important decisions, investigations, and context are stored
  3. Next session — The agent already knows what happened before
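The three steps above can be sketched with stubbed tool calls. `engram_search` and `engram_store` below are hypothetical stand-ins for the real MCP tools, backed by a plain list instead of the Engram service:

```python
memory = []  # in-memory stand-in for the Engram backend

def engram_store(title, content):
    """Stand-in for the real store tool."""
    memory.append({"title": title, "content": content})

def engram_search(query):
    """Stand-in for the real search tool: naive substring match."""
    return [m for m in memory if query.lower() in m["content"].lower()]

# 1. On session start: pull prior context for the user's first message.
engram_store("db decision", "Use Postgres instead of MySQL for the new service")
context = engram_search("postgres")

# 2. During the session: store important decisions as they happen.
engram_store("schema decision", "Catalog schema uses JSONB columns")

# 3. Next session: the same search surfaces the earlier work.
print([m["title"] for m in engram_search("postgres")])  # ['db decision']
```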

Setup for Claude Code

Add a CLAUDE.md file to your project root with these instructions:

## Engram Memory

You have access to Engram as an MCP server. Use it to maintain persistent memory.

### On session start

Search Engram for context relevant to the current task:

search
  query: "<summary of what the user is asking about>"
  limit: 5

Include relevant results in your working context.

### During the session

When important work is done, store it:

create_conversation
  title: "<what was discussed>"
  agent_id: "claude-code"
  tags: ["<project-name>", "<topic>"]

append_messages
  conversation_id: "<id>"
  messages:
    - role: "user"
      content: "<what the user asked>"
    - role: "assistant"
      content: "<what you did and why>"

### What to store

- Decisions and their reasoning
- Bug investigations and resolutions
- User preferences and workflow
- Architecture discussions

Setup for Claude Desktop

Add to your system prompt or project instructions:

You have access to Engram memory tools. At the start of each conversation, search Engram for relevant prior context. When you learn something important about the user or make a significant decision, store it in Engram so you can recall it in future conversations.

Setup for custom agents

In your agent’s system prompt:

You have persistent memory via Engram. Before responding to the user:

1. Search Engram for relevant prior conversations
2. Use any relevant results to inform your response

After the conversation, store important context:

1. Create a conversation with a descriptive title and tags
2. Append the key messages from this session

What gets remembered

| Store | Don’t store |
| --- | --- |
| Decisions and reasoning | Routine code searches |
| Bug investigations & fixes | "Hello" / "Thanks" |
| User preferences | Info already in git history |
| Architecture discussions | Temporary debugging output |
| Project context & goals | File contents (they’re in the repo) |
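An agent can apply the table above with a rough heuristic. The keyword lists here are illustrative guesses, not part of Engram:

```python
# Illustrative keyword lists only; tune these for your own projects.
SIGNAL = ("decided", "because", "prefer", "architecture", "root cause")
NOISE = ("hello", "thanks", "ls -la")

def worth_storing(text: str) -> bool:
    """Rough triage: store decisions and reasoning, skip pleasantries."""
    t = text.lower()
    if any(k in t for k in SIGNAL):
        return True
    if any(k in t for k in NOISE):
        return False
    return len(t.split()) > 20  # long exchanges tend to carry real context

print(worth_storing("We decided on Postgres because of JSONB support"))  # True
print(worth_storing("thanks"))  # False
```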

Example: Memory in action

Session 1 (Monday):

User: "Let's use Postgres instead of MySQL for the new service"
Agent: [stores in Engram with tags: ["database", "architecture"]]

Session 2 (Thursday):

User: "Set up the database for the new service"
Agent: [searches Engram → finds Monday's decision]
Agent: "Setting up Postgres — we decided on Monday to use it instead of MySQL because of the JSONB support for the catalog schema."

No re-explaining. No lost context. The agent just knows.


Concepts

  • Conversations — A container for messages. Has a title, optional tags, and metadata.
  • Messages — Verbatim records of what was said. Roles: user, assistant, system, tool.
  • Chunks — Sliding windows of messages, automatically created and embedded for search.
  • Organizations — Tenant isolation. Each API key belongs to one org. Data never leaks across orgs.
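Sliding-window chunking, as described above, can be sketched like this. The window size and stride are made-up parameters; Engram's actual values are not documented here:

```python
def chunk_messages(messages, window=4, stride=2):
    """Group consecutive messages into overlapping windows for embedding.

    Overlap (window > stride) keeps context that spans a chunk boundary
    findable from either side.
    """
    chunks = []
    for start in range(0, max(len(messages) - window + 1, 1), stride):
        chunks.append(messages[start:start + window])
    return chunks

msgs = [f"m{i}" for i in range(8)]
for c in chunk_messages(msgs):
    print(c)  # three windows: m0-m3, m2-m5, m4-m7
```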

Next Steps

  • API Reference — All 6 MCP tools with parameters and examples
  • Integrations — Claude Desktop, Cursor, Windsurf, custom clients
  • Concepts — How storage and search work under the hood
  • Architecture — Deep dive into how everything works
  • Use Cases — Agent memory, support history, knowledge bases