Have you ever had this feeling: you already explained something to your AI assistant… but it still makes the same mistake again. And again.
Or agents that rely on RAG, where the knowledge base was uploaded once and never keeps up with how your code and product evolve? No dynamic updates, no memory of what actually worked, what broke production, or what was refactored.
That’s exactly what we’re fixing with our ODAM-powered long-term memory for AI assistants.
Instead of a static snapshot, it builds a living, human-like memory layer over your work:
• remembers what you’ve already tried and which patterns actually worked
• tracks code changes and decisions over time, not just files in isolation
• keeps context fresh, even when requirements, APIs, and architectures change
• reduces “hallucinated confidence” by grounding answers in your real history
Early results from our internal usage:
• ~80% fewer errors and misunderstandings of user intent
• ~30% faster task completion
• up to 60% fewer tokens consumed
For me, seeing these numbers in a real workflow is not just “nice metrics” — it’s a confirmation that AI can really learn from you over time, not just respond to a single prompt.
Most AI coding assistants still “forget” your project between prompts. That makes them feel magical in demos and frustrating in real work.
ODAM Memory for Cursor is an open-source extension that gives Cursor a real, project-scoped long-term memory layer:
• hooks into Cursor’s beforeSubmitPrompt / afterAgentResponse / afterAgentThought events
• stores chat + code artifacts in an external memory engine (ODAM)
• injects only the most relevant facts back into .cursor/rules/odam-memory.mdc before each prompt
• isolates memory per workspace via session_id and shows project-specific stats in the status bar
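To make the capture side concrete, here is a minimal sketch of a hook script that forwards a Cursor hook event to the local Hook Event Server. The payload shape, the port, and the route are assumptions for illustration, not the extension's actual contract.

```ts
// forward-hook.ts -- illustrative sketch only; the real extension ships its own scripts.
// Assumptions: the hook payload arrives as JSON on stdin, and the local Hook Event
// Server listens on http://localhost:8765/events (port and route are hypothetical).
import { stdin } from "node:process";

async function readStdin(): Promise<string> {
  const chunks: Buffer[] = [];
  for await (const chunk of stdin) chunks.push(chunk as Buffer);
  return Buffer.concat(chunks).toString("utf8");
}

async function main() {
  const payload = JSON.parse(await readStdin()); // e.g. { hook: "beforeSubmitPrompt", prompt: "..." }
  await fetch("http://localhost:8765/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      session_id: process.env.ODAM_SESSION_ID, // per-workspace isolation
      event: payload,
    }),
  });
}

main().catch(() => process.exit(0)); // never block Cursor if forwarding fails
```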
ODAM (Ontology Driven Agent Memory) is a stand-alone memory microservice that gives any LLM product selective, long-term memory using entity extraction, relationship graphs, embeddings, and memory guards. It’s been running in production inside our mental-health platform AI PSY HELP, which handles tens of thousands of sensitive conversations and requires stable long-term personalization plus strict safety constraints. The same memory engine now powers Cursor — think of it as a dedicated brain for your AI tools, specialized in remembering and updating context over time.
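To picture the integration surface, here is a rough client sketch for talking to a memory service like ODAM: store an interaction, then retrieve only the facts relevant to the next prompt. The endpoint paths, field names, and response shape are assumptions for illustration, not the actual ODAM API.

```ts
// odam-client.ts -- illustrative sketch; endpoints and field names are hypothetical.
type Fact = {
  entity: string;    // e.g. "payments-service"
  relation: string;  // e.g. "depends_on"
  target: string;    // e.g. "billing-db"
  note?: string;     // e.g. "v2 migration broke staging"
};

const ODAM_URL = process.env.ODAM_URL ?? "http://localhost:9000"; // assumed default

// Store one interaction so the memory engine can extract entities and relationships.
export async function storeEvent(sessionId: string, role: string, content: string): Promise<void> {
  await fetch(`${ODAM_URL}/memory/events`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ session_id: sessionId, role, content }),
  });
}

// Retrieve only the facts most relevant to the upcoming prompt.
export async function retrieveFacts(sessionId: string, query: string, limit = 10): Promise<Fact[]> {
  const res = await fetch(`${ODAM_URL}/memory/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ session_id: sessionId, query, limit }),
  });
  return (await res.json()).facts as Fact[];
}
```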
At a high level, the Cursor extension:
1. captures every chat & code interaction via official hooks
2. builds an evolving knowledge graph of your project in ODAM
3. injects only the relevant facts into Cursor before each prompt
A small Hook Event Server runs locally. Cursor calls the official hooks, tiny scripts forward events, and ODAM responds with compact, structured facts (entities, relationships, decisions, outcomes) instead of raw history. That keeps the context window lean and focused.
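The injection step can be pictured roughly like this: before each prompt, the retrieved facts are rendered into the rules file that Cursor reads. The front matter keys and fact formatting below are my assumptions for illustration, not the extension's exact output.

```ts
// inject-memory.ts -- sketch of writing retrieved facts into .cursor/rules/odam-memory.mdc.
// Front matter keys and fact formatting are assumptions for illustration.
import { writeFile, mkdir } from "node:fs/promises";
import { join } from "node:path";
import { retrieveFacts } from "./odam-client"; // hypothetical client from the sketch above

export async function injectMemory(workspaceRoot: string, sessionId: string, prompt: string) {
  const facts = await retrieveFacts(sessionId, prompt, 10);

  const body = [
    "---",
    "description: Project memory injected by ODAM",
    "alwaysApply: true",
    "---",
    "",
    "# Relevant project memory",
    ...facts.map(f => `- ${f.entity} ${f.relation} ${f.target}${f.note ? ` (${f.note})` : ""}`),
    "",
  ].join("\n");

  const rulesDir = join(workspaceRoot, ".cursor", "rules");
  await mkdir(rulesDir, { recursive: true });
  await writeFile(join(rulesDir, "odam-memory.mdc"), body, "utf8");
}
```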
Under the hood, ODAM maintains episodic, semantic, procedural and project memory; a knowledge graph of services, modules, APIs, tools, issues and constraints; and an embedding index that retrieves only the most relevant facts. Memory enforcement, context-injection metrics and memory health indicators keep this long-term memory reliable.
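As a toy illustration of what "retrieves only the most relevant facts" can mean, here is a cosine-similarity ranking over pre-computed fact embeddings. ODAM's actual scoring (graph signals, memory guards, recency) is richer than this sketch.

```ts
// rank-facts.ts -- toy illustration of embedding-based fact selection, not ODAM's real scorer.
type EmbeddedFact = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the k facts closest to the query embedding.
export function topKFacts(queryVector: number[], facts: EmbeddedFact[], k = 10): EmbeddedFact[] {
  return [...facts]
    .sort((x, y) => cosine(queryVector, y.vector) - cosine(queryVector, x.vector))
    .slice(0, k);
}
```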
ODAM did not start as a dev-tools project — it already powers AI PSY HELP and pilots in skills, employability and recovery programs, where tracking progress over months matters more than answering a single question.
Now the same core architecture supports code and project work inside Cursor.
We’re building AI PSY HELP – an AI-powered mental health assistant offering 24/7 anonymous support via voice and text, without appointments or waiting. It’s used by 100,000+ people in Ukraine, including veterans, teens, and first responders.
The AI is trained on 40,000+ hours of real psychotherapy sessions and provides individualized emotional guidance to help users manage stress, anxiety, and trauma. We partner with public institutions to deliver large-scale support and just launched a B2B program for employers.
Now preparing for EU expansion (starting with Germany), mobile app rollout, and voice interaction in Ukrainian. This is not just a chatbot – it’s scalable mental health infrastructure.
How did you get people to agree to training a chatbot on their sessions? That strikes me as extremely intimate text. Is it a "it's in the T&Cs" deal, or did you seek a separate opt-in?
I'm asking because the answer will shed light on the level of privacy "the average consumer" is comfortable with.
Great question, and I fully agree — privacy in mental health is sacred.
We don’t train on user chats directly. Instead, we collaborate with a team of 42 certified psychologists who work with us to curate anonymized case structures, decision trees, and response strategies based on real but depersonalized therapeutic experience.
These professionals help us model how psychological support is provided — without ever using actual user conversations. Our system is trained on synthesized, anonymized session data that reflects best practices, not private logs.
It’s not buried in the T&Cs — we’re very explicit about our commitment to data ethics and user safety. No session data is used for model training, and user interaction is fully confidential and never stored in a way that links it to identities.
Our goal is to make high-quality support available without compromising trust. Let me know if you’d like more technical or ethical detail — happy to share!
We combine the flexibility of an LLM with a structured layer of expert-driven decision trees and psychological frameworks. This hybrid approach lets us preserve nuance and personalization while maintaining safety, boundaries, and clinical integrity.
The decision tree layer is used both to steer responses contextually and to define escalation protocols (e.g., for suicidal ideation, PTSD triggers, or crisis states). It’s informed by standardized practices like CBT, trauma therapy, and psychological first aid, co-developed with our licensed psychologists.
So yes — think of it as an LLM augmented by a domain-specific expert system, designed for real-world psychological use.
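As a rough illustration of how such a rule layer can sit in front of the model, the sketch below checks an incoming message against escalation rules before any LLM call. The trigger patterns and protocol names are placeholders, not our clinical rules.

```ts
// escalation-layer.ts -- simplified sketch of a rule layer in front of the LLM.
// Trigger patterns and protocol names are placeholders, not clinical content.
type Escalation = { protocol: string; triggers: RegExp };

const ESCALATIONS: Escalation[] = [
  { protocol: "crisis_support", triggers: /suicid|self-harm/i },
  { protocol: "trauma_grounding", triggers: /flashback|panic attack/i },
];

export function routeMessage(message: string): { mode: "escalate"; protocol: string } | { mode: "llm" } {
  for (const rule of ESCALATIONS) {
    if (rule.triggers.test(message)) {
      // Hand off to a predefined, expert-reviewed protocol instead of free-form generation.
      return { mode: "escalate", protocol: rule.protocol };
    }
  }
  return { mode: "llm" }; // normal path: LLM response constrained by the decision-tree context
}
```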
Happy to share more if you’re interested in how we’re scaling this across multilingual and cultural contexts.
No need to go any further — to be honest, it's one of those problems I'd be too risk-averse to tackle. But thanks, it was very interesting to hear about your approach.
GitHub: https://github.com/aipsyhelp/Cursor_ODAM
Website: https://odam.dev/