Lesson 12 — OpenClaw ontology Skill: Give AI Persistent Memory and a Structured Knowledge Graph (2026)
Goal: Install the ontology Skill so OpenClaw can remember structured information like people, projects, and tasks — persisting them across sessions.
Regular Context Memory vs. ontology Persistent Memory
The biggest difference isn't "how much" is remembered, but "what gets remembered and how it can be queried":
| Dimension | Regular Context Memory | ontology Persistent Memory |
|---|---|---|
| Persistence | Gone when the conversation closes | Saved permanently on local disk |
| Structure | Unstructured text | Entity + relationship graph |
| Cross-session | Not supported | Supported |
| Query method | Only by scrolling through context | Precise natural language queries |
| Data limit | Bounded by context window | Limited only by local disk |
| Relational reasoning | Weak | Can traverse entity relationships |
This Skill is the foundation for cross-session memory in OpenClaw. Once installed, important things you discuss with AI — project progress, team member information, decision records — become structured nodes that can be precisely retrieved the next time you open a conversation.
Step 1: Install the ontology Skill
/install @oswalpalash/ontology
Verify after installation:
pnpm openclaw skills list
# ontology should appear in the list
The Skill creates a knowledge graph storage directory locally:
~/.openclaw/workspace/skills/ontology/
├── SKILL.md ← Skill main logic
└── graph.json ← Knowledge graph data (JSON format, can be viewed directly)
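Because graph.json is plain JSON, you can inspect it with any tooling outside OpenClaw. A minimal sketch in Python, assuming a hypothetical top-level layout with an "entities" list — check your own file's actual schema first:

```python
import json
from collections import Counter
from pathlib import Path

# Default OpenClaw location; the internal schema used below is an assumption.
GRAPH_PATH = Path.home() / ".openclaw/workspace/skills/ontology/graph.json"

def count_by_type(graph: dict) -> Counter:
    """Tally entities by their "type" field."""
    return Counter(e["type"] for e in graph.get("entities", []))

# Stand-in data mirroring the assumed layout; with the real file you would use
# graph = json.loads(GRAPH_PATH.read_text())
sample_graph = {
    "entities": [
        {"type": "Person", "name": "Alice Chen"},
        {"type": "Project", "name": "Payment System Refactor"},
    ],
    "relations": [],
}
print(count_by_type(sample_graph))  # Counter({'Person': 1, 'Project': 1})
```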
Step 2: Create Entities
ontology supports five built-in entity types:
Person — People (colleagues, clients, contacts)
Project — Projects (code repos, product lines, contracts)
Task — Tasks (to-dos, milestones, bugs)
Event — Events (meetings, launches, deadlines)
Document — Documents (reports, contracts, notes)
Create a Person Entity
Remember: Alice Chen, backend engineer, responsible for the payment module, prefers Slack communication, timezone UTC-8
AI automatically parses the description and stores it in the graph:
{
"type": "Person",
"name": "Alice Chen",
"attributes": {
"role": "backend engineer",
"domain": "payment module",
"contact": "Slack",
"timezone": "UTC-8"
}
}
Create a Project Entity
Remember project: Payment System Refactor, due 2026-06-30, owner Alice Chen, status in-progress
Create a Task Entity
Add task: Complete payment gateway migration spec doc by Friday, priority high, linked to project "Payment System Refactor"
It's that simple — describe in natural language, and ontology automatically extracts and stores the structured information.
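Under the same hypothetical storage layout as the Person example above, the Project and Task might land in graph.json as records like these — the field names are illustrative, not guaranteed:

```python
# Illustrative entity records; actual attribute keys in graph.json may differ.
project = {
    "type": "Project",
    "name": "Payment System Refactor",
    "attributes": {"owner": "Alice Chen", "due": "2026-06-30", "status": "in-progress"},
}

task = {
    "type": "Task",
    "name": "Payment gateway migration spec doc",
    "attributes": {"priority": "high", "due": "Friday", "project": "Payment System Refactor"},
}
```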
Step 3: Query Existing Entities
After installing ontology, you can query previously stored information across sessions using natural language:
Who is Alice Chen? What is she responsible for?
What are the high-priority tasks this week?
What's the current status of the "Payment System Refactor" project?
List all projects and tasks related to Alice Chen
In a self-hosted AI agent setup with long-term memory, this query capability is especially valuable: no matter how long ago the last conversation happened, AI retrieves precise answers from the graph instead of offering a fuzzy "I think we talked about this before."
Step 4: Entity Linking
The most powerful feature of ontology is building relationships between entities:
Link: Alice Chen is responsible for project "Payment System Refactor"; her contact is Bob Liu (product manager)
Mark: task "Payment Gateway Migration Doc" belongs to project "Payment System Refactor", assigned to Alice Chen
Once links are established, you can do graph traversal queries:
What are all the people and tasks related to "Payment System Refactor"?
Which projects is Bob Liu involved in, and who are his direct collaborators?
This kind of relational reasoning is impossible with regular context memory alone.
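A traversal query amounts to walking the stored relations. A sketch under the same assumed schema — the relation format here, "source"/"target" pairs in a top-level "relations" list, is an assumption:

```python
def neighbors(graph: dict, name: str) -> set[str]:
    """Entities directly linked to `name`, in either direction."""
    linked = set()
    for rel in graph.get("relations", []):
        if rel["source"] == name:
            linked.add(rel["target"])
        elif rel["target"] == name:
            linked.add(rel["source"])
    return linked

graph = {
    "entities": [],
    "relations": [
        {"source": "Alice Chen", "type": "responsible_for",
         "target": "Payment System Refactor"},
        {"source": "Payment Gateway Migration Doc", "type": "belongs_to",
         "target": "Payment System Refactor"},
    ],
}
print(sorted(neighbors(graph, "Payment System Refactor")))
# ['Alice Chen', 'Payment Gateway Migration Doc']
```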
Step 5: Update and Delete Entities
Update: Alice Chen has completed the payment gateway migration; change task status to "completed"
Update: Payment System Refactor project deadline pushed back to 2026-08-15
Delete entity: remove the "Payment Gateway Migration Doc" task
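Behind the scenes, an update is just a field change on the stored record. A sketch under the same assumed schema (all names hypothetical):

```python
def set_status(graph: dict, entity_name: str, status: str) -> bool:
    """Set the "status" attribute on the named entity; returns False if absent."""
    for e in graph.get("entities", []):
        if e["name"] == entity_name:
            e.setdefault("attributes", {})["status"] = status
            return True
    return False

graph = {"entities": [
    {"type": "Task", "name": "Payment Gateway Migration Doc", "attributes": {}}
]}
set_status(graph, "Payment Gateway Migration Doc", "completed")
print(graph["entities"][0]["attributes"]["status"])  # completed
```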
Step 6: Combine with self-improving-agent
ontology vs. self-improving-agent: they complement each other — neither replaces the other.
- self-improving-agent (Lesson 09): records AI behavioral preferences and operational experience ("use pnpm not npm")
- ontology: records business knowledge and real-world information (people, projects, tasks, events)
Install both for a complete memory system:
/install @pskoett/self-improving-agent
/install @oswalpalash/ontology
Real-world effect:
You: Update Alice's task progress
AI: (retrieves Alice Chen's tasks from ontology)
Alice currently has 3 in-progress tasks:
1. Payment Gateway Migration Doc (high priority, due Friday)
2. ...
Which task would you like to update?
AI both knows "who Alice is" (ontology) and knows "you prefer concise responses" (self-improving-agent).
FAQ
Will AI still remember what we talked about after closing OpenClaw?
After installing the ontology Skill, information you explicitly store in the graph is permanently saved in ~/.openclaw/workspace/skills/ontology/graph.json regardless of whether the conversation is closed. The next time you open OpenClaw in a new session, AI can still query this information via natural language. Note: only content explicitly stored in the graph persists — ordinary chat content still disappears when the session ends.
What entity types can the ontology Skill store?
ontology has five built-in types: Person, Project, Task, Event, and Document. Each type can carry custom attributes (like role, status, due date), and entities can have relationships with each other, supporting graph traversal queries. If the built-in types aren't sufficient, you can also describe custom types in natural language and ontology will automatically infer the classification.
What's the difference between ontology and self-improving-agent?
They solve different memory problems. self-improving-agent records AI behavioral patterns and operational experience — like "this project uses pnpm" or "don't list sources in summaries" — focusing on improving how AI works. ontology records real-world business knowledge — like "Alice is a backend engineer responsible for the payment module" — focusing on helping AI understand your work environment. Both Skills can be installed simultaneously for compounding, complementary effects.
Where is the knowledge graph data stored? Will it be uploaded to the cloud?
All data is saved locally in ~/.openclaw/workspace/skills/ontology/graph.json in standard JSON format — you can view and back it up directly. OpenClaw is a fully self-hosted solution; data doesn't pass through any cloud server and won't be uploaded to Anthropic or any third-party platform. To share the knowledge graph across multiple devices, manually copy the graph.json file or put it under git version control.
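A tiny sketch of the manual-copy option; the path in the usage comment assumes the default install location:

```python
import shutil
import time
from pathlib import Path

def backup_graph(graph_path: Path) -> Path:
    """Copy graph.json to a date-stamped .bak file next to it."""
    backup = graph_path.with_name(f"graph-{time.strftime('%Y%m%d')}.json.bak")
    shutil.copy2(graph_path, backup)  # copy2 preserves file timestamps
    return backup

# Usage against the default OpenClaw location:
# backup_graph(Path.home() / ".openclaw/workspace/skills/ontology/graph.json")
```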