Initial commit: Clawdbot Memory System installer

One-command persistent memory for Clawdbot.
Prevents context amnesia during compaction with:
- Two-layer memory: Markdown source of truth + SQLite vector search
- Pre-compaction flush to save context before it's lost
- Semantic search across all memory files
- Daily logs, research intel, and project tracking templates
- Interactive installer with dry-run and uninstall support
Jake Shore 2026-02-10 13:35:36 -05:00
commit cb28c2649f
13 changed files with 1770 additions and 0 deletions

.gitignore vendored Normal file
# OS
.DS_Store
Thumbs.db
# Node
node_modules/
package-lock.json
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# Database (these are local to each install)
*.db
*.db-wal
*.db-shm
# Backups created by installer
*.pre-memory-backup
*.pre-uninstall-backup
*.bak
# Secrets
.env
.env.local

ARCHITECTURE.md Normal file
# Architecture
Technical details of how the Clawdbot Memory System works.
---
## System Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ CLAWDBOT AGENT │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Chat Session │ │ Tool: write │ │ Tool: memory_ │ │
│ │ │ │ (file ops) │ │ search │ │
│ └───────┬───────┘ └──────┬───────┘ └────────┬─────────┘ │
│ │ │ │ │
└──────────┼───────────────────┼──────────────────────┼─────────────┘
│ │ │
│ ┌──────▼───────┐ ┌──────▼───────┐
│ │ Markdown │ │ SQLite + │
│ │ Files │◄──────│ sqlite-vec │
│ │ (source of │ index │ Vector Store │
│ │ truth) │───────► │
│ └──────────────┘ └───────────────┘
│ ▲
│ │
└───────────────────┘
Agent writes memories
during session
```
---
## Write Flow
When the agent decides to store a memory:
```
Agent decides to remember something
┌─────────────────┐
│ Write to file │
│ memory/YYYY-MM- │
│ DD.md │
└────────┬────────┘
┌─────────────────┐
│ File watcher │ ← Clawdbot watches memory/ for changes
│ detects change │ (debounced — waits for writes to settle)
└────────┬────────┘
┌─────────────────┐
│ Chunking │ ← File split into meaningful chunks
│ (by section/ │ (headers, paragraphs, list items)
│ paragraph) │
└────────┬────────┘
┌─────────────────┐
│ Embedding │ ← Each chunk → embedding vector
│ Provider │ (OpenAI / Gemini / Local GGUF)
│ │
│ text-embedding- │
│ 3-small (1536d) │
│ or │
│ gemini-embed- │
│ ding-001 │
│ or │
│ local GGUF model │
└────────┬────────┘
┌─────────────────┐
│ SQLite + │ ← Vectors stored in sqlite-vec
│ sqlite-vec │ Alongside original text chunks
│ │ and metadata (file, date, section)
│ memory.db │
└─────────────────┘
```
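The top of that pipeline can be sketched in a few lines of shell. `note_memory` is a hypothetical helper, not shipped tooling; once the file changes on disk, the watcher, chunking, and embedding stages run on their own:

```bash
# Illustrative sketch: step 1 of the write path is just an append to
# today's daily log. Everything downstream (watch, chunk, embed, index)
# is automatic.
note_memory() {
  local workspace="$1" text="$2"
  local log="$workspace/memory/$(date +%Y-%m-%d).md"
  mkdir -p "${log%/*}"                  # ensure memory/ exists
  printf -- '- %s\n' "$text" >> "$log"  # append as a list item
  echo "$log"                           # report where the note landed
}
```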
---
## Search Flow
When the agent needs to recall something:
```
Agent: "What did we decide about the API rate limits?"
┌─────────────────┐
│ memory_search │ ← Tool invoked automatically
│ tool called │ (or agent calls it explicitly)
└────────┬────────┘
┌─────────────────┐
│ Query embedding │ ← Same provider as index
│ generated │ "API rate limits decision"
└────────┬────────┘ → [0.23, -0.11, 0.87, ...]
├─────────────────────────┐
│ │
▼ ▼
┌─────────────────┐ ┌─────────────────┐
│ Vector search │ │ Keyword search │
│ (cosine sim) │ │ (BM25 / FTS) │
│ │ │ │
│ Finds semanti- │ │ Finds exact │
│ cally similar │ │ keyword matches │
│ chunks │ │ │
└────────┬────────┘ └────────┬────────┘
│ │
└────────────┬────────────┘
┌─────────────────┐
│ Hybrid merge │ ← Combines both result sets
│ & ranking │
└────────┬────────┘
┌─────────────────┐
│ Top N chunks │ ← Relevant memory fragments
│ returned │ injected into agent context
└────────┬────────┘
Agent has full context
to answer the question 🎉
```
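A toy model of the "hybrid merge" box, with hypothetical inputs: each file holds one chunk id per line, best match first. The real ranking is fancier; this only shows the merge-then-deduplicate shape:

```bash
# Combine both result sets, keeping the first occurrence of each chunk id
# (a standard awk deduplication idiom that preserves order).
merge_results() {
  cat "$1" "$2" | awk '!seen[$0]++'
}
```

For example, if vector search returns `c1 c2 c3` and keyword search returns `c2 c4`, the merged list is `c1 c2 c3 c4`.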
---
## Pre-Compaction Flush Flow
The safety net that prevents amnesia:
```
Context Window
┌──────────────────────────────────────────┐
│ System prompt │
│ AGENTS.md │
│ Memory search results │
│ ───────────────────────────────── │
│ Old messages ← these get compacted │
│ ... │
│ ... │
│ Recent messages │
│ ───────────────────────────────── │
│ Reserve tokens (floor: 20,000) │
└──────────────────────────────────────────┘
Token count approaches limit
(contextWindow - reserveTokensFloor
- softThresholdTokens)
┌───────────────────────┐
│ Clawdbot triggers │
│ memory flush │
│ │
│ Silent system prompt: │
│ "Session nearing │
│ compaction. Store │
│ durable memories." │
│ │
│ Silent user prompt: │
│ "Write lasting notes │
│ to memory/; reply │
│ NO_REPLY if nothing │
│ to store." │
└───────────┬───────────┘
┌───────────────────────┐
│ Agent writes to disk │
│ │
│ • Current work status │
│ • Pending decisions │
│ • Important context │
│ • Where we left off │
└───────────┬───────────┘
┌───────────────────────┐
│ File watcher triggers │
│ re-index │
└───────────┬───────────┘
┌───────────────────────┐
│ Compaction happens │
│ (old messages removed/ │
│ summarized) │
└───────────┬───────────┘
Memories safe on disk ✅
Indexed and searchable ✅
Agent can recall later ✅
```
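As a worked example of the trigger arithmetic: the 200k context window below is an assumed model limit, while the other two numbers are the defaults from the Config Structure section.

```bash
context_window=200000    # assumption: model-dependent limit
reserve_floor=20000      # reserveTokensFloor
soft_threshold=4000      # memoryFlush.softThresholdTokens
flush_at=$(( context_window - reserve_floor - soft_threshold ))
echo "flush fires near ${flush_at} tokens"   # 176000
```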
---
## Storage Layout
```
~/.clawdbot/
├── clawdbot.json ← Config with memorySearch settings
├── workspace/ ← Agent workspace (configurable)
│ ├── AGENTS.md ← Agent instructions (with memory habits)
│ ├── MEMORY.md ← Curated long-term memory (optional)
│ │
│ ├── memory/ ← Daily logs & research intel
│ │ ├── 2026-01-15.md ← Daily log
│ │ ├── 2026-01-16.md
│ │ ├── 2026-02-10.md ← Today
│ │ ├── project-x-research-intel.md
│ │ ├── TEMPLATE-daily.md ← Reference template
│ │ ├── TEMPLATE-research-intel.md
│ │ └── TEMPLATE-project-tracking.md
│ │
│ └── ... (other workspace files)
└── agents/
└── main/
└── agent/
└── memory/ ← Vector index (managed by Clawdbot)
└── memory.db ← SQLite + sqlite-vec database
```
---
## Config Structure
The memory system config lives in `clawdbot.json` under `agents.defaults.memorySearch`:
```jsonc
{
"agents": {
"defaults": {
// ... other config (model, workspace, etc.) ...
"memorySearch": {
// Embedding provider: "openai" | "gemini" | "local"
"provider": "openai",
// Model name (provider-specific)
"model": "text-embedding-3-small",
// Remote provider settings (OpenAI / Gemini)
"remote": {
"apiKey": "sk-...", // Optional if using env var
"baseUrl": "...", // Optional custom endpoint
"headers": {} // Optional extra headers
},
// Additional paths to index (beyond memory/ and MEMORY.md)
"extraPaths": ["../team-docs"],
// Fallback provider if primary fails
"fallback": "local" // "openai" | "gemini" | "local" | "none"
},
// Pre-compaction memory flush (enabled by default)
"compaction": {
"reserveTokensFloor": 20000,
"memoryFlush": {
"enabled": true,
"softThresholdTokens": 4000
}
}
}
}
}
```
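The same kind of patch can be sketched with `python3` for systems without `jq` (the real installer uses `jq`; `add_memory_search` is a hypothetical helper). Existing keys in `clawdbot.json` are left untouched:

```bash
add_memory_search() {
  local config="$1"
  python3 - "$config" <<'PY'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)

defaults = cfg.setdefault("agents", {}).setdefault("defaults", {})
# Only add memorySearch if it is not already configured
defaults.setdefault("memorySearch", {
    "provider": "openai",
    "model": "text-embedding-3-small",
})

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PY
}
```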
---
## Data Flow Summary
```
WRITE PATH READ PATH
────────── ─────────
Agent writes note Agent needs context
│ │
▼ ▼
memory/YYYY-MM-DD.md memory_search("query")
│ │
▼ ▼
File watcher Embed query
│ │
▼ ▼
Chunk + embed Vector + keyword search
│ │
▼ ▼
Store in SQLite Return top chunks
│ │
▼ ▼
Index updated ✅ Context restored ✅
```

LICENSE Normal file
MIT License
Copyright (c) 2026 BusyBee3333
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

MIGRATION.md Normal file
# Migration Guide
How to move to the Clawdbot Memory System from your current setup.
---
## Scenario 1: No Memory System → Full System
**You have:** A fresh Clawdbot install, or one where you never set up memory files.
**What happens:**
- The installer creates `memory/` directory with templates
- Configures vector search in `clawdbot.json`
- Adds memory habits to `AGENTS.md`
- Creates today's first daily log
**What to do:**
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh)
```
That's it. You're done.
**After install:**
Your agent will start writing daily logs automatically. Give it a day or two and you'll see `memory/` filling up with daily context.
---
## Scenario 2: Existing MEMORY.md Only → Full System
**You have:** A `MEMORY.md` file in your workspace with curated long-term memories.
**What happens:**
- Your `MEMORY.md` is **preserved exactly as-is** — nothing is changed or moved
- `MEMORY.md` gets **indexed** into vector search alongside the new daily logs
- The installer adds the `memory/` directory for daily logs (separate from your curated MEMORY.md)
- Your agent can now search both the curated file AND daily logs semantically
**What changes:**
| Before | After |
|--------|-------|
| Agent reads MEMORY.md at session start | Agent reads MEMORY.md at session start **AND** searches it semantically |
| No daily logs | Daily logs in `memory/YYYY-MM-DD.md` |
| No search across history | Semantic search across all memory files |
| Context lost on compaction | Pre-compaction flush saves context |
**What to do:**
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh)
```
The installer detects your existing `MEMORY.md` and includes it in the index.
**Best practice after migration:**
Keep using `MEMORY.md` for curated, high-signal facts (preferences, key decisions, identity). Use `memory/YYYY-MM-DD.md` for day-to-day session logs. They complement each other:
- `MEMORY.md` = "who I am and what matters" (manually curated)
- `memory/*.md` = "what happened and when" (written by the agent during sessions)
---
## Scenario 3: Existing Daily Logs → Full System
**You have:** A `memory/` directory with some daily log files already.
**What happens:**
- All your existing `.md` files in `memory/` are **preserved**
- They all get **indexed** into vector search
- Templates are added (won't overwrite existing files with the same name)
- Config and AGENTS.md are patched
**What to do:**
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh)
```
The installer will show you how many existing files it found:
```
Found 23 existing memory files in /path/to/workspace/memory
These will be preserved and indexed
```
---
## Scenario 4: Manual Memory Search Setup → This System
**You have:** You've already configured `memorySearch` in `clawdbot.json` manually.
**What happens:**
- The installer detects your existing `memorySearch` config
- It asks if you want to overwrite or keep your current settings
- If you keep: only templates, AGENTS.md patch, and re-index are done
- If you overwrite: your old config is backed up first
**What to do:**
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh)
```
When prompted about existing config, choose based on your preference.
---
## How Pre-Compaction Flush Prevents Amnesia
This is the most important piece for people who've been losing context.
### The Problem
```
Session starts → You work for hours → Context fills up →
Compaction triggers → Old messages summarized/removed →
Agent has a vague summary but lost all the details →
"What were we working on?" → Agent has no idea 😞
```
### The Solution
```
Session starts → You work for hours → Context fills up →
┌───────────────┐
│ Pre-compaction │
│ flush triggers │
└───────┬───────┘
Agent writes ALL
important context to
memory/YYYY-MM-DD.md
Compaction happens
(details removed from
context window)
But memories are ON DISK
and INDEXED for search
"What were we working on?"
→ Agent searches memory →
Full context restored 🎉
```
### How It Works Technically
Clawdbot has a built-in feature: `compaction.memoryFlush`. When enabled (it is by default), it sends a silent prompt to the agent before compaction saying "store durable memories now." The agent then writes everything important to disk.
The memory system makes this work by:
1. **Having a place to write** (`memory/YYYY-MM-DD.md`)
2. **Having instructions to follow** (the AGENTS.md memory habits)
3. **Having a way to retrieve** (vector search index)
Without all three, the flush either doesn't happen, writes to nowhere useful, or the agent can't find what it wrote later.
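In config terms, the flush side of this is controlled by the fragment below, which repeats the defaults shown in ARCHITECTURE.md (flush enabled, 20k reserve floor, 4k soft threshold):

```jsonc
{
  "agents": {
    "defaults": {
      "compaction": {
        "reserveTokensFloor": 20000,
        "memoryFlush": {
          "enabled": true,
          "softThresholdTokens": 4000
        }
      }
    }
  }
}
```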
---
## Storage Layout After Migration
```
~/.clawdbot/workspace/
├── AGENTS.md ← Updated with memory habits
├── MEMORY.md ← Preserved if it existed
├── memory/
│ ├── 2026-01-15.md ← Your existing logs (preserved)
│ ├── 2026-01-16.md ← ...
│ ├── 2026-02-10.md ← Today's log (created by installer)
│ ├── TEMPLATE-daily.md ← Reference template
│ ├── TEMPLATE-research-intel.md ← Reference template
│ └── TEMPLATE-project-tracking.md ← Reference template
└── ...
```
---
## Rollback
If anything goes wrong, the installer creates backups:
- `clawdbot.json.pre-memory-backup` — your original config
- To restore: `cp ~/.clawdbot/clawdbot.json.pre-memory-backup ~/.clawdbot/clawdbot.json`
Or use the uninstaller:
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh) --uninstall
```
This removes config changes but **never deletes your memory files**.

README.md Normal file
# 🧠 Clawdbot Memory System
**One-command persistent memory for Clawdbot — never lose context to compaction again.**
> "Why does my agent forget everything after a long session?"
Because Clawdbot compacts old context to stay within its context window. Without a memory system, everything that was compacted is gone. This repo fixes that permanently.
---
## What This Is
A **two-layer memory system** for Clawdbot:
1. **Markdown files** (source of truth) — Daily logs, research intel, project tracking, and durable notes your agent writes to disk
2. **SQLite vector search** (retrieval layer) — Semantic search index that lets your agent find relevant memories even when wording differs
Your agent writes memories to plain Markdown. Those files get indexed into a vector store. When the agent needs context, it searches semantically and finds what it needs — even across sessions, even after compaction.
## Quick Install
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh)
```
That's it. The installer will:
- ✅ Detect your Clawdbot installation
- ✅ Create the `memory/` directory with templates
- ✅ Patch your `clawdbot.json` with memory search config (without touching anything else)
- ✅ Add memory habits to your `AGENTS.md`
- ✅ Build the initial vector index
- ✅ Verify everything works
### Preview First (Dry Run)
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh) --dry-run
```
### Uninstall
```bash
bash <(curl -sL https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/install.sh) --uninstall
```
---
## How It Works
```
┌─────────────────────────────────────────────────────────┐
│ YOUR AGENT SESSION │
│ │
│ Agent writes notes ──→ memory/2026-02-10.md │
│ Agent stores facts ──→ MEMORY.md │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ File Watcher │ (debounced) │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ Embedding Provider │ │
│ │ (OpenAI / Gemini / │ │
│ │ Local GGUF) │ │
│ └───────────┬───────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ SQLite + sqlite-vec │ │
│ │ Vector Index │ │
│ └───────────┬───────────┘ │
│ │ │
│ Agent asks ──────────┤ │
│ "what did we decide │ │
│ about the API?" ▼ │
│ ┌───────────────────────┐ │
│ │ Hybrid Search │ │
│ │ (semantic + keyword) │ │
│ └───────────┬───────────┘ │
│ │ │
│ ▼ │
│ Relevant memory chunks │
│ injected into context │
└─────────────────────────────────────────────────────────┘
```
### Pre-Compaction Flush
This is the secret sauce. When your session nears its context limit:
```
Session approaching limit
┌─────────────────────┐
│ Pre-compaction ping │ ← Clawdbot silently triggers this
│ "Store durable │
│ memories now" │
└──────────┬────────────┘
Agent writes lasting notes
to memory/YYYY-MM-DD.md
Context gets compacted
(old messages removed)
BUT memories are on disk
AND indexed for search
Agent can find them anytime 🎉
```
---
## Embedding Provider Options
The installer will ask which provider you want:
| Provider | Speed | Cost | Setup |
|----------|-------|------|-------|
| **OpenAI** (recommended) | ⚡ Fast | ~$0.02/million tokens | API key required |
| **Gemini** | ⚡ Fast | Free tier available | API key required |
| **Local** | 🐢 Slower first run | Free | Downloads GGUF model (~100MB) |
**OpenAI** (`text-embedding-3-small`) is recommended for the best experience. It's extremely cheap and fast.
**Gemini** (`gemini-embedding-001`) works great and has a generous free tier.
**Local** uses `node-llama-cpp` with a GGUF model — fully offline, no API key needed, but the first index build is slower.
---
## Manual Setup (Alternative)
If you prefer to set things up yourself instead of using the installer:
### 1. Create the memory directory
```bash
mkdir -p ~/.clawdbot/workspace/memory
```
### 2. Add memory search config to clawdbot.json
Open `~/.clawdbot/clawdbot.json` and add `memorySearch` inside `agents.defaults`:
**For OpenAI:**
```json
{
"agents": {
"defaults": {
"memorySearch": {
"provider": "openai",
"model": "text-embedding-3-small"
}
}
}
}
```
**For Gemini:**
```json
{
"agents": {
"defaults": {
"memorySearch": {
"provider": "gemini",
"model": "gemini-embedding-001"
}
}
}
}
```
**For Local:**
```json
{
"agents": {
"defaults": {
"memorySearch": {
"provider": "local"
}
}
}
}
```
### 3. Set your API key (if using OpenAI or Gemini)
For OpenAI, set `OPENAI_API_KEY` in your environment or in `clawdbot.json` under `models.providers.openai.apiKey`.
For Gemini, set `GEMINI_API_KEY` in your environment or in `clawdbot.json` under `models.providers.google.apiKey`.
### 4. Build the index
```bash
clawdbot memory index --verbose
```
### 5. Verify
```bash
clawdbot memory status --deep
```
### 6. Restart the gateway
```bash
clawdbot gateway restart
```
---
## What Gets Indexed
By default, Clawdbot indexes:
- `MEMORY.md` — Long-term curated memory
- `memory/*.md` — Daily logs and all memory files
All files must be Markdown (`.md`). The index watches for changes and re-indexes automatically.
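That default scope can be expressed as a short shell sketch (`list_indexable` is a hypothetical helper, not a Clawdbot command): `MEMORY.md` plus every `.md` file under `memory/`, and nothing else.

```bash
list_indexable() {
  local ws="$1"
  # The curated long-term file, if present
  [ -f "$ws/MEMORY.md" ] && echo "$ws/MEMORY.md"
  # All Markdown files under memory/ (daily logs, intel files, templates)
  find "$ws/memory" -type f -name '*.md' 2>/dev/null
}
```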
### Adding Extra Paths
Want to index files outside the default layout? Add `extraPaths`:
```json
{
"agents": {
"defaults": {
"memorySearch": {
"extraPaths": ["../team-docs", "/path/to/other/notes"]
}
}
}
}
```
---
## Troubleshooting
### "No API key found for provider openai/google"
You need to set your embedding API key. Either:
- Set the environment variable (`OPENAI_API_KEY` or `GEMINI_API_KEY`)
- Or add it to `clawdbot.json` under `models.providers`
### "Memory search stays disabled"
Run `clawdbot memory status --deep` to see what's wrong. Common causes:
- No embedding provider configured
- API key missing or invalid
- No `.md` files in `memory/` directory
### Index not updating
Run a manual reindex:
```bash
clawdbot memory index --force --verbose
```
### Agent still seems to forget things
Make sure your `AGENTS.md` includes memory instructions. The agent needs to be told to:
1. Search memory before answering questions about prior work
2. Write important things to daily logs
3. Flush memories before compaction
The installer handles this automatically.
### Installer fails with "jq not found"
The installer needs `jq` for safe JSON patching. Install it:
```bash
# macOS
brew install jq
# Ubuntu/Debian
sudo apt-get install jq
# Or download from https://jqlang.github.io/jq/
```
---
## FAQ
### Why does my agent forget everything?
Clawdbot uses a context window with a token limit. When a session gets long, old messages are **compacted** (summarized and removed) to make room. Without a memory system, the details in those old messages are lost forever.
This memory system solves it by:
1. Writing important context to files on disk (survives any compaction)
2. Indexing those files for semantic search (agent can find them later)
3. Flushing memories right before compaction happens (nothing falls through the cracks)
### How is this different from just having MEMORY.md?
`MEMORY.md` alone is a single file that the agent reads at session start. It works for small amounts of info, but:
- It doesn't scale (gets too big to fit in context)
- It's not searchable (agent has to read the whole thing)
- Daily details get lost (you can't put everything in one file)
This system adds **daily logs** (unlimited history) + **vector search** (find anything semantically) + **pre-compaction flush** (automatic safety net).
### Does this cost money?
- **Local embeddings**: Free (but slower)
- **OpenAI embeddings**: ~$0.02 per million tokens (essentially free for personal use)
- **Gemini embeddings**: Free tier available
For reference, indexing 100 daily logs costs about $0.001 with OpenAI.
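The arithmetic behind that estimate, assuming roughly 500 tokens per daily log (actual logs vary): 100 logs is about 50,000 tokens at ~$0.02 per million.

```bash
logs=100
tokens_per_log=500        # assumption: a short daily log
price_per_million=0.02    # text-embedding-3-small, approximate
cost=$(awk -v tokens=$(( logs * tokens_per_log )) -v p="$price_per_million" \
  'BEGIN { printf "%.4f", tokens / 1000000 * p }')
echo "\$${cost} to index ${logs} logs"   # $0.0010 to index 100 logs
```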
### Can I use this with multiple agents?
Yes. Each agent uses the same workspace `memory/` directory by default. You can scope with `--agent <id>` for commands.
### Is my data sent to the cloud?
Only if you use remote embeddings (OpenAI/Gemini). In that case the raw text of each memory chunk is sent to the provider to generate embedding vectors, so the provider does see your notes at indexing time. If you want full privacy, use `local` embeddings: everything stays on your machine.
### Can I run the installer multiple times?
Yes! It's idempotent. It checks for existing files and config before making changes, and backs up your config before patching.
---
## Architecture
See [ARCHITECTURE.md](ARCHITECTURE.md) for detailed diagrams.
## Migrating from Another Setup
See [MIGRATION.md](MIGRATION.md) for step-by-step migration guides.
## License
MIT — see [LICENSE](LICENSE)
---
**Built for the Clawdbot community** by people who got tired of explaining things to their agent twice.

## Memory System (auto-added by clawdbot-memory-system installer)
### Mandatory Memory Recall
Before answering ANY question about prior work, decisions, or context from previous sessions:
1. Use `memory_search` to find relevant memories
2. Check today's daily log: `memory/YYYY-MM-DD.md`
3. Check yesterday's log if today's is sparse
4. Only say "I don't recall" if memory search returns nothing
### Daily Memory Log
- Write to `memory/YYYY-MM-DD.md` throughout the session
- Log: decisions made, user preferences discovered, project progress, action items, blockers
- Be specific — future-you needs exact details, not vague summaries
- Include: names, URLs, version numbers, error messages, config values — anything that would be painful to re-discover
### Pre-Compaction Flush
When you sense a session is getting long or receive a compaction warning:
- Write ALL important unsaved context to today's daily log immediately
- Include: what we were working on, where we left off, any pending decisions, partial results
- This is your last chance before amnesia — be thorough, not brief
### Research Intel System
For ongoing research/monitoring projects:
- Store in: `memory/{project}-research-intel.md`
- Current week's detailed intel at TOP of file
- Compressed 1-3 sentence summaries of previous weeks at BOTTOM
- When asked about action items or strategy, check active research intel files first
### Git Backup Habit
End of each session or major milestone:
```bash
cd ~/.clawdbot/workspace && git add -A && git commit -m "session backup: $(date +%Y-%m-%d)" && git push
```
This keeps identity, memory, and progress backed up offsite.
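A slightly more defensive variant of the same habit (a hypothetical helper; assumes the workspace is already a git repository): skip the commit when nothing changed, and only push when an `origin` remote exists.

```bash
backup_workspace() {
  local ws="${1:-$HOME/.clawdbot/workspace}"
  cd "$ws" || return 1
  git add -A
  # Commit only when the index differs from HEAD (avoids empty commits)
  git diff --cached --quiet || git commit -m "session backup: $(date +%Y-%m-%d)"
  # Push only when an offsite remote is actually configured
  if git remote get-url origin >/dev/null 2>&1; then
    git push
  fi
}
```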

{
"agents": {
"defaults": {
"memorySearch": {
"provider": "gemini",
"model": "gemini-embedding-001"
}
}
}
}

{
"agents": {
"defaults": {
"memorySearch": {
"provider": "local"
}
}
}
}

{
"agents": {
"defaults": {
"memorySearch": {
"provider": "openai",
"model": "text-embedding-3-small"
}
}
}
}

install.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
# ============================================================================
# Clawdbot Memory System — One-Command Installer
# https://github.com/BusyBee3333/clawdbot-memory-system
#
# Usage:
# bash install.sh # Interactive install
# bash install.sh --dry-run # Preview changes without applying
# bash install.sh --uninstall # Remove memory system config
# ============================================================================
VERSION="1.0.0"
# --- Colors & Formatting ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
BOLD='\033[1m'
DIM='\033[2m'
NC='\033[0m' # No Color
# --- State ---
DRY_RUN=false
UNINSTALL=false
CLAWDBOT_DIR=""
WORKSPACE_DIR=""
CONFIG_FILE=""
AGENTS_FILE=""
MEMORY_DIR=""
PROVIDER=""
API_KEY=""
CHANGES_MADE=()
FILES_INDEXED=0
EXISTING_MEMORY_FILES=0
# --- Parse Args ---
for arg in "$@"; do
case "$arg" in
--dry-run) DRY_RUN=true ;;
--uninstall) UNINSTALL=true ;;
--help|-h)
echo "Clawdbot Memory System Installer v${VERSION}"
echo ""
echo "Usage:"
echo " bash install.sh Interactive install"
echo " bash install.sh --dry-run Preview changes without applying"
echo " bash install.sh --uninstall Remove memory system config"
exit 0
;;
*)
echo -e "${RED}Unknown option: $arg${NC}"
echo "Use --help for usage information."
exit 1
;;
esac
done
# --- Helper Functions ---
info() { echo -e "${BLUE}ℹ️${NC} $1"; }
success() { echo -e "${GREEN}✅${NC} $1"; }
warn() { echo -e "${YELLOW}⚠️${NC} $1"; }
error() { echo -e "${RED}❌${NC} $1"; }
step() { echo -e "\n${BOLD}${MAGENTA}$1${NC}"; }
detail() { echo -e " ${DIM}$1${NC}"; }
dry() { echo -e " ${CYAN}[dry-run]${NC} $1"; }
banner() {
echo ""
echo -e "${BOLD}${MAGENTA}"
echo " ╔══════════════════════════════════════════════╗"
echo " ║ 🧠 Clawdbot Memory System v${VERSION}"
echo " ║ Never lose context to compaction again ║"
echo " ╚══════════════════════════════════════════════╝"
echo -e "${NC}"
}
# --- Detect Clawdbot Installation ---
detect_clawdbot() {
step "🔍 Detecting Clawdbot installation..."
# Check common locations
for dir in "$HOME/.clawdbot" "$HOME/.openclaw" "$HOME/.moltbot"; do
if [[ -d "$dir" ]]; then
CLAWDBOT_DIR="$dir"
break
fi
done
if [[ -z "$CLAWDBOT_DIR" ]]; then
error "Clawdbot installation not found!"
echo ""
echo " Looked in:"
echo " ~/.clawdbot"
echo " ~/.openclaw"
echo " ~/.moltbot"
echo ""
echo " Make sure Clawdbot is installed first:"
echo " https://docs.clawd.bot/getting-started"
exit 1
fi
success "Found Clawdbot at ${BOLD}${CLAWDBOT_DIR}${NC}"
# Find config file
CONFIG_FILE="${CLAWDBOT_DIR}/clawdbot.json"
if [[ ! -f "$CONFIG_FILE" ]]; then
# Try alternate names
for name in "openclaw.json" "moltbot.json"; do
if [[ -f "${CLAWDBOT_DIR}/${name}" ]]; then
CONFIG_FILE="${CLAWDBOT_DIR}/${name}"
break
fi
done
fi
if [[ ! -f "$CONFIG_FILE" ]]; then
error "Config file not found at ${CONFIG_FILE}"
echo " Run 'clawdbot doctor' first to initialize your setup."
exit 1
fi
success "Found config at ${BOLD}${CONFIG_FILE}${NC}"
# Find workspace
WORKSPACE_DIR=$(python3 -c "
import json, os
with open('${CONFIG_FILE}') as f:
cfg = json.load(f)
ws = cfg.get('agents', {}).get('defaults', {}).get('workspace', '')
if not ws:
ws = os.path.expanduser('~') + '/' + os.path.basename('${CLAWDBOT_DIR}') + '/workspace'
print(os.path.expanduser(ws))
" 2>/dev/null || echo "${CLAWDBOT_DIR}/workspace")
if [[ ! -d "$WORKSPACE_DIR" ]]; then
warn "Workspace directory not found at ${WORKSPACE_DIR}"
if $DRY_RUN; then
dry "Would create workspace directory"
else
mkdir -p "$WORKSPACE_DIR"
success "Created workspace directory"
fi
fi
success "Workspace: ${BOLD}${WORKSPACE_DIR}${NC}"
MEMORY_DIR="${WORKSPACE_DIR}/memory"
AGENTS_FILE="${WORKSPACE_DIR}/AGENTS.md"
}
# --- Check Dependencies ---
check_deps() {
step "🔧 Checking dependencies..."
# Check for jq
if ! command -v jq &>/dev/null; then
warn "jq is not installed (needed for safe JSON config patching)"
echo ""
if command -v brew &>/dev/null; then
echo -e " Install with Homebrew? ${DIM}(recommended)${NC}"
read -rp " [Y/n] " yn
yn=${yn:-Y}
if [[ "$yn" =~ ^[Yy] ]]; then
if $DRY_RUN; then
dry "Would run: brew install jq"
else
echo " Installing jq..."
brew install jq
success "jq installed"
fi
else
error "jq is required. Install it manually:"
echo " brew install jq"
echo " # or: sudo apt-get install jq"
echo " # or: https://jqlang.github.io/jq/download/"
exit 1
fi
else
error "jq is required but not installed."
echo ""
echo " Install jq:"
echo " macOS: brew install jq"
echo " Ubuntu/Debian: sudo apt-get install jq"
echo " Other: https://jqlang.github.io/jq/download/"
exit 1
fi
else
success "jq found: $(jq --version)"
fi
# Check for clawdbot CLI
if ! command -v clawdbot &>/dev/null; then
# Try openclaw or moltbot (an alias would not expand inside this
# non-interactive script, so shim the CLI with a function instead)
if command -v openclaw &>/dev/null; then
clawdbot() { openclaw "$@"; }
success "openclaw CLI found"
elif command -v moltbot &>/dev/null; then
clawdbot() { moltbot "$@"; }
success "moltbot CLI found"
else
warn "clawdbot CLI not found in PATH"
detail "Index building will be skipped — run manually after install"
fi
else
success "clawdbot CLI found"
fi
}
# --- Check Existing Memory Files ---
check_existing() {
step "📂 Checking existing memory files..."
if [[ -d "$MEMORY_DIR" ]]; then
EXISTING_MEMORY_FILES=$(find "$MEMORY_DIR" -name "*.md" -type f 2>/dev/null | wc -l | tr -d ' ')
if [[ "$EXISTING_MEMORY_FILES" -gt 0 ]]; then
info "Found ${BOLD}${EXISTING_MEMORY_FILES}${NC} existing memory files in ${MEMORY_DIR}"
detail "These will be preserved and indexed"
else
info "Memory directory exists but is empty"
fi
else
info "No existing memory directory — will create one"
fi
# Check for MEMORY.md
if [[ -f "${WORKSPACE_DIR}/MEMORY.md" ]]; then
info "Found existing MEMORY.md — will be included in index"
fi
}
# --- Create Memory Directory & Templates ---
setup_memory_dir() {
step "📝 Setting up memory directory and templates..."
if $DRY_RUN; then
if [[ ! -d "$MEMORY_DIR" ]]; then
dry "Would create ${MEMORY_DIR}/"
fi
dry "Would copy template files to ${MEMORY_DIR}/"
return
fi
# Create memory directory
if [[ ! -d "$MEMORY_DIR" ]]; then
mkdir -p "$MEMORY_DIR"
success "Created ${MEMORY_DIR}/"
CHANGES_MADE+=("Created memory/ directory")
else
info "Memory directory already exists"
fi
# Copy templates (don't overwrite existing)
local script_dir
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local templates_dir="${script_dir}/templates"
# If running from curl, download templates
if [[ ! -d "$templates_dir" ]]; then
local base_url="https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/templates"
templates_dir=$(mktemp -d)
for tmpl in TEMPLATE-daily.md TEMPLATE-research-intel.md TEMPLATE-project-tracking.md; do
curl -sL "${base_url}/${tmpl}" -o "${templates_dir}/${tmpl}" 2>/dev/null || true
done
fi
# Copy each template if it doesn't exist
for tmpl_file in "${templates_dir}"/TEMPLATE-*.md; do
[[ -f "$tmpl_file" ]] || continue
local basename
basename=$(basename "$tmpl_file")
if [[ ! -f "${MEMORY_DIR}/${basename}" ]]; then
cp "$tmpl_file" "${MEMORY_DIR}/${basename}"
success "Added template: ${basename}"
CHANGES_MADE+=("Added ${basename}")
else
detail "Template already exists: ${basename}"
fi
done
# Create today's daily log if none exists
local today
today=$(date +%Y-%m-%d)
if [[ ! -f "${MEMORY_DIR}/${today}.md" ]]; then
cat > "${MEMORY_DIR}/${today}.md" << DAILY
# Daily Log — ${today}
## What We Worked On
- Set up Clawdbot Memory System 🧠
## Decisions Made
- Installed persistent memory to prevent context loss during compaction
## Next Steps
- Use the memory system naturally — agent will write daily logs
- Check \`memory/\` directory for accumulated context over time
## Open Questions / Blockers
-
## Notable Context
Memory system installed and indexed. Agent should now persist important context
across sessions automatically.
DAILY
success "Created today's daily log: ${today}.md"
CHANGES_MADE+=("Created daily log ${today}.md")
fi
}
# --- Choose Embedding Provider ---
choose_provider() {
step "🤖 Choose your embedding provider..."
echo ""
echo -e " ${BOLD}1)${NC} ${GREEN}OpenAI${NC} ${DIM}(recommended — fast, cheap ~\$0.02/M tokens)${NC}"
echo -e " ${BOLD}2)${NC} ${BLUE}Gemini${NC} ${DIM}(free tier available)${NC}"
echo -e " ${BOLD}3)${NC} ${YELLOW}Local${NC} ${DIM}(free, offline, slower first run)${NC}"
echo ""
read -rp " Choose [1/2/3] (default: 1): " choice
choice=${choice:-1}
case "$choice" in
1) PROVIDER="openai" ;;
2) PROVIDER="gemini" ;;
3) PROVIDER="local" ;;
*)
warn "Invalid choice, defaulting to OpenAI"
PROVIDER="openai"
;;
esac
success "Selected: ${BOLD}${PROVIDER}${NC}"
# Prompt for API key if needed
if [[ "$PROVIDER" == "openai" ]]; then
if [[ -n "${OPENAI_API_KEY:-}" ]]; then
info "Found OPENAI_API_KEY in environment"
else
echo ""
echo -e " ${DIM}OpenAI API key is needed for embeddings.${NC}"
echo -e " ${DIM}Get one at: https://platform.openai.com/api-keys${NC}"
echo -e " ${DIM}(Press Enter to skip — you can set it later as OPENAI_API_KEY)${NC}"
echo ""
read -rsp " OpenAI API Key: " API_KEY
echo ""
if [[ -n "$API_KEY" ]]; then
success "API key provided"
else
warn "No API key provided — set OPENAI_API_KEY env var before using memory search"
fi
fi
elif [[ "$PROVIDER" == "gemini" ]]; then
if [[ -n "${GEMINI_API_KEY:-}" ]]; then
info "Found GEMINI_API_KEY in environment"
else
echo ""
echo -e " ${DIM}Gemini API key is needed for embeddings.${NC}"
echo -e " ${DIM}Get one at: https://aistudio.google.com/apikey${NC}"
echo -e " ${DIM}(Press Enter to skip — you can set it later as GEMINI_API_KEY)${NC}"
echo ""
read -rsp " Gemini API Key: " API_KEY
echo ""
if [[ -n "$API_KEY" ]]; then
success "API key provided"
else
warn "No API key provided — set GEMINI_API_KEY env var before using memory search"
fi
fi
fi
}
# --- Patch clawdbot.json ---
patch_config() {
step "⚙️ Patching clawdbot.json..."
# Check if memorySearch already configured
local existing_provider
existing_provider=$(jq -r '.agents.defaults.memorySearch.provider // empty' "$CONFIG_FILE" 2>/dev/null || true)
if [[ -n "$existing_provider" ]]; then
info "memorySearch already configured (provider: ${existing_provider})"
echo ""
read -rp " Overwrite with new provider ($PROVIDER)? [y/N] " yn
yn=${yn:-N}
if [[ ! "$yn" =~ ^[Yy] ]]; then
info "Keeping existing config"
return
fi
fi
if $DRY_RUN; then
dry "Would back up ${CONFIG_FILE}${CONFIG_FILE}.pre-memory-backup"
dry "Would add memorySearch config (provider: ${PROVIDER})"
if [[ -n "$API_KEY" ]]; then
dry "Would add API key to config"
fi
return
fi
# Back up config
cp "$CONFIG_FILE" "${CONFIG_FILE}.pre-memory-backup"
success "Backed up config → ${CONFIG_FILE}.pre-memory-backup"
# Build the memorySearch config based on provider
local memory_config
case "$PROVIDER" in
openai)
memory_config='{"provider":"openai","model":"text-embedding-3-small"}'
if [[ -n "$API_KEY" ]]; then
memory_config=$(echo "$memory_config" | jq --arg key "$API_KEY" '. + {remote: {apiKey: $key}}')
fi
;;
gemini)
memory_config='{"provider":"gemini","model":"gemini-embedding-001"}'
if [[ -n "$API_KEY" ]]; then
memory_config=$(echo "$memory_config" | jq --arg key "$API_KEY" '. + {remote: {apiKey: $key}}')
fi
;;
local)
memory_config='{"provider":"local"}'
;;
esac
# Merge into config using jq (safe, non-destructive)
local tmp_file
tmp_file=$(mktemp)
jq --argjson ms "$memory_config" '
.agents.defaults.memorySearch = (
(.agents.defaults.memorySearch // {}) * $ms
)
' "$CONFIG_FILE" > "$tmp_file"
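# Illustration (hypothetical values): with PROVIDER=openai and an existing
# config of {"agents":{"defaults":{}}}, the merge above yields
#   {"agents":{"defaults":{"memorySearch":
#     {"provider":"openai","model":"text-embedding-3-small"}}}}
# jq's `*` operator deep-merges objects, so any unrelated keys already present
# under .agents.defaults.memorySearch are preserved; keys from $ms win on conflict.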
# Validate the result is valid JSON
if jq empty "$tmp_file" 2>/dev/null; then
mv "$tmp_file" "$CONFIG_FILE"
success "Added memorySearch config (provider: ${PROVIDER})"
CHANGES_MADE+=("Patched clawdbot.json with memorySearch (${PROVIDER})")
else
rm -f "$tmp_file"
error "Failed to patch config — restoring backup"
cp "${CONFIG_FILE}.pre-memory-backup" "$CONFIG_FILE"
exit 1
fi
}
# --- Patch AGENTS.md ---
patch_agents() {
step "📋 Updating AGENTS.md with memory instructions..."
# Check if already patched
if [[ -f "$AGENTS_FILE" ]] && grep -qE "memory_search|Daily memory log|Pre-compaction flush" "$AGENTS_FILE" 2>/dev/null; then
info "AGENTS.md already contains memory instructions"
return
fi
if $DRY_RUN; then
dry "Would append memory instructions to ${AGENTS_FILE}"
return
fi
# Get the patch content
local script_dir
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"  # fall back to $0 when piped from curl
local patch_file="${script_dir}/config/agents-memory-patch.md"
local patch_content
if [[ -f "$patch_file" ]]; then
patch_content=$(cat "$patch_file")
else
# Download if running from curl
patch_content=$(curl -sL "https://raw.githubusercontent.com/BusyBee3333/clawdbot-memory-system/main/config/agents-memory-patch.md" 2>/dev/null || true)
fi
if [[ -z "$patch_content" ]]; then
# Fallback: inline the patch
patch_content='
## Memory System (auto-added by clawdbot-memory-system installer)
### Mandatory Memory Recall
Before answering ANY question about prior work, decisions, or context from previous sessions:
1. Use `memory_search` to find relevant memories
2. Check today'\''s daily log: `memory/YYYY-MM-DD.md`
3. Check yesterday'\''s log if today'\''s is sparse
4. Only say "I don'\''t recall" if memory search returns nothing
### Daily Memory Log
- Write to `memory/YYYY-MM-DD.md` throughout the session
- Log: decisions, preferences, project progress, action items, blockers
- Be specific — future-you needs exact details, not vague summaries
### Pre-Compaction Flush
When you sense a session is getting long or receive a compaction warning:
- Write ALL important unsaved context to today'\''s daily log
- Include: what we were working on, where we left off, any pending decisions
- This is your safety net against amnesia
### Research Intel System
For ongoing research/monitoring projects:
- Store in: `memory/{project}-research-intel.md`
- Current week'\''s detailed intel at TOP
- Compressed 1-3 sentence summaries of previous weeks at BOTTOM
- Check for active research intel files on strategic questions
### Git Backup
End of session: `cd ~/.clawdbot/workspace && git add -A && git commit -m "session backup" && git push`
'
fi
if [[ -f "$AGENTS_FILE" ]]; then
echo "" >> "$AGENTS_FILE"
echo "$patch_content" >> "$AGENTS_FILE"
else
echo "$patch_content" > "$AGENTS_FILE"
fi
success "Added memory instructions to AGENTS.md"
CHANGES_MADE+=("Updated AGENTS.md with memory habits")
}
# --- Build Index ---
build_index() {
step "🔨 Building memory search index..."
if $DRY_RUN; then
dry "Would run: clawdbot memory index --verbose"
dry "Would run: clawdbot memory status --deep"
return
fi
if ! command -v clawdbot &>/dev/null; then
warn "clawdbot CLI not in PATH — skipping index build"
detail "Run manually: clawdbot memory index --verbose"
return
fi
echo ""
info "Indexing memory files..."
echo -e "${DIM}"
if clawdbot memory index --verbose 2>&1; then
echo -e "${NC}"
success "Index built successfully"
else
echo -e "${NC}"
warn "Index build had issues (this may be normal on first run)"
detail "Check: clawdbot memory status --deep"
fi
echo ""
info "Verifying installation..."
echo -e "${DIM}"
if clawdbot memory status --deep 2>&1; then
echo -e "${NC}"
success "Memory system verified"
else
echo -e "${NC}"
warn "Status check had warnings (embedding provider may need API key)"
fi
# Count indexed files
FILES_INDEXED=$(find "$MEMORY_DIR" -name "*.md" -type f 2>/dev/null | wc -l | tr -d ' ')
if [[ -f "${WORKSPACE_DIR}/MEMORY.md" ]]; then
FILES_INDEXED=$((FILES_INDEXED + 1))
fi
}
# --- Print Summary ---
print_summary() {
echo ""
echo -e "${BOLD}${GREEN}"
echo " ╔══════════════════════════════════════════════╗"
echo " ║ 🎉 Installation Complete! ║"
echo " ╚══════════════════════════════════════════════╝"
echo -e "${NC}"
if [[ ${#CHANGES_MADE[@]} -gt 0 ]]; then
echo -e " ${BOLD}Changes made:${NC}"
for change in "${CHANGES_MADE[@]}"; do
echo -e "   ${GREEN}${NC} ${change}"
done
echo ""
fi
if [[ "$EXISTING_MEMORY_FILES" -gt 0 ]]; then
echo -e " ${BOLD}Migration:${NC}"
echo -e " 📁 ${EXISTING_MEMORY_FILES} existing memory files preserved and indexed"
echo ""
fi
echo -e " ${BOLD}Index:${NC}"
echo -e " 📊 ${FILES_INDEXED} memory files indexed"
echo -e " 🤖 Provider: ${PROVIDER}"
echo ""
echo -e " ${BOLD}${CYAN}Next steps:${NC}"
echo ""
echo -e " ${BOLD}1.${NC} Restart the gateway to apply config changes:"
echo -e " ${DIM}clawdbot gateway restart${NC}"
echo ""
echo -e " ${BOLD}2.${NC} Start chatting! Your agent will now:"
echo -e " • Write daily logs to memory/${DIM}YYYY-MM-DD${NC}.md"
echo -e " • Search memories before answering about prior work"
echo -e " • Flush context before compaction"
echo ""
echo -e " ${BOLD}3.${NC} Verify anytime with:"
echo -e " ${DIM}clawdbot memory status --deep${NC}"
echo ""
if [[ -n "$API_KEY" ]]; then
echo -e " ${YELLOW}⚠️ Your API key was saved to clawdbot.json.${NC}"
echo -e " ${DIM} Alternatively, set it as an env var and remove from config.${NC}"
echo ""
fi
echo -e " ${DIM}Problems? See: https://github.com/BusyBee3333/clawdbot-memory-system#troubleshooting${NC}"
echo ""
}
# --- Print Dry Run Summary ---
print_dry_summary() {
echo ""
echo -e "${BOLD}${CYAN}"
echo " ╔══════════════════════════════════════════════╗"
echo " ║ 📋 Dry Run Summary ║"
echo " ╚══════════════════════════════════════════════╝"
echo -e "${NC}"
echo ""
echo -e " No changes were made. Run without ${CYAN}--dry-run${NC} to apply."
echo ""
}
# --- Uninstall ---
do_uninstall() {
banner
step "🗑️ Uninstalling Clawdbot Memory System..."
detect_clawdbot
echo ""
warn "This will:"
echo " • Remove memorySearch config from clawdbot.json"
echo " • Remove memory instructions from AGENTS.md"
echo ""
echo -e " ${BOLD}This will NOT delete your memory/ files.${NC}"
echo " Your memories are safe — only the config is removed."
echo ""
read -rp " Continue? [y/N] " yn
yn=${yn:-N}
if [[ ! "$yn" =~ ^[Yy] ]]; then
info "Cancelled"
exit 0
fi
# Remove memorySearch from config
if [[ -f "$CONFIG_FILE" ]] && jq -e '.agents.defaults.memorySearch' "$CONFIG_FILE" &>/dev/null; then
cp "$CONFIG_FILE" "${CONFIG_FILE}.pre-uninstall-backup"
local tmp_file
tmp_file=$(mktemp)
jq 'del(.agents.defaults.memorySearch)' "$CONFIG_FILE" > "$tmp_file"
if jq empty "$tmp_file" 2>/dev/null; then
mv "$tmp_file" "$CONFIG_FILE"
success "Removed memorySearch from config"
else
rm -f "$tmp_file"
error "Failed to patch config"
fi
else
info "No memorySearch config found"
fi
# Remove memory instructions from AGENTS.md
if [[ -f "$AGENTS_FILE" ]]; then
if grep -q "Memory System (auto-added by clawdbot-memory-system installer)" "$AGENTS_FILE" 2>/dev/null; then
cp "$AGENTS_FILE" "${AGENTS_FILE}.pre-uninstall-backup"
# Remove from the memory system header to the end of the file.
# The installer appends this section last, so deleting to EOF is safe.
python3 -c "
import re
with open('${AGENTS_FILE}') as f:
    content = f.read()
# Remove the auto-added section
pattern = r'\n*## Memory System \(auto-added by clawdbot-memory-system installer\).*'
content = re.sub(pattern, '', content, flags=re.DOTALL)
with open('${AGENTS_FILE}', 'w') as f:
    f.write(content.rstrip() + '\n')
" 2>/dev/null || true
success "Removed memory instructions from AGENTS.md"
else
info "No auto-added memory instructions found in AGENTS.md"
fi
fi
echo ""
success "Uninstall complete"
echo ""
echo -e " ${DIM}Your memory/ files were preserved.${NC}"
echo -e " ${DIM}Config backups saved as .pre-uninstall-backup files.${NC}"
echo -e " ${DIM}Run 'clawdbot gateway restart' to apply changes.${NC}"
echo ""
exit 0
}
# ============================================================================
# Main Flow
# ============================================================================
# Handle uninstall
if $UNINSTALL; then
do_uninstall
fi
# Banner
banner
if $DRY_RUN; then
echo -e " ${CYAN}${BOLD}Running in dry-run mode — no changes will be made${NC}"
echo ""
fi
# Step 1: Detect Clawdbot
detect_clawdbot
# Step 2: Check dependencies
check_deps
# Step 3: Check existing files
check_existing
# Step 4: Choose provider
choose_provider
# Step 5: Set up memory directory
setup_memory_dir
# Step 6: Patch clawdbot.json
patch_config
# Step 7: Patch AGENTS.md
patch_agents
# Step 8: Build index
build_index
# Step 9: Summary
if $DRY_RUN; then
print_dry_summary
else
print_summary
fi

View File

@ -0,0 +1,16 @@
# Daily Log — YYYY-MM-DD
## What We Worked On
-
## Decisions Made
-
## Next Steps
-
## Open Questions / Blockers
-
## Notable Context
(anything future-me needs to know that isn't captured above)

View File

@ -0,0 +1,15 @@
# {Project Name} Progress
## Current Status
- **Stage:**
- **Last Update:**
- **Blockers:**
## Recent Work
-
## Decisions Log
-
## Next Steps
1.

View File

@ -0,0 +1,16 @@
# {Project Name} Research Intel
## Week of {Date} (Scan #1)
### Key Findings
-
### Market Signals
-
### Action Items
1.
---
## Previous Weeks Summary
(compressed 1-3 sentence summaries of older weeks go here)