# 🛡️ Hardened Memory Playbook — Unified Setup Guide

*OpenClaw Memory v2 — Built-in memory-core with local embeddings + 3-layer defense system*

*Works on any machine from 8GB to 36GB+. Hardware-specific tuning notes included.*

---

## Prerequisites

- macOS or Linux machine with 8GB+ RAM
- OpenClaw installed and running
- Node.js 18+
- ~600MB disk space for the embedding model

---

## Step 1: Configure memory-core

Edit your `~/.clawdbot/openclaw.json` (or use `gateway config.patch`):

```json
{
  "memorySearch": {
    "provider": "local",
    "localModel": "embeddinggemma-300M"
  },
  "compaction": {
    "memoryFlush": {
      "enabled": true,
      "softThresholdTokens": 10000
    }
  }
}
```

**⚠️ Critical:** `softThresholdTokens` is NOT an absolute threshold. The formula is:

```
Flush at: contextWindow - reserveTokensFloor - softThresholdTokens
```

`10000` means "flush when within 10K tokens of the compaction boundary." Do NOT set this to a large number like 170000 — that pushes the flush point close to zero, so the agent flushes constantly and gets stuck in compaction loops.
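The formula above can be worked through with concrete numbers. This is a sketch only: the 200K context window and 20K reserve floor are illustrative values, not OpenClaw defaults, so substitute your model's actual figures.

```python
# Worked example of the flush formula above.
# 200K window and 20K reserve floor are ILLUSTRATIVE, not OpenClaw defaults.

def flush_point(context_window, reserve_tokens_floor, soft_threshold):
    """Token count at which memoryFlush triggers."""
    return context_window - reserve_tokens_floor - soft_threshold

print(flush_point(200_000, 20_000, 10_000))   # correct: flush fires near the boundary
print(flush_point(200_000, 20_000, 170_000))  # misconfigured: flush fires almost immediately
```

With `softThresholdTokens` at 170000, the flush point drops to roughly 10K tokens, which is why oversized values produce compaction loops.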

---

## Step 2: Set Up Memory Files

```bash
mkdir -p ~/clawd/memory/bank

# Core files
touch ~/clawd/MEMORY.md
touch ~/clawd/memory/active-thread.md

# Bank files (durable, never compacted)
touch ~/clawd/memory/bank/world.md
touch ~/clawd/memory/bank/experience.md
touch ~/clawd/memory/bank/opinions.md
touch ~/clawd/memory/bank/entities.md
```

Add your identity and key facts to `MEMORY.md`. This file is always available to the agent.

> **8GB machines:** Keep bank files under 5KB each. Be selective about what you retain — every KB in memory files is a KB the embedding model processes.
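To keep that 5KB guideline honest, a small check script helps. This is a sketch, not part of OpenClaw; it assumes the bank directory layout created in this step.

```python
# Flags bank files over the 5KB guideline for 8GB machines.
# Sketch only -- assumes the ~/clawd/memory/bank layout from Step 2.
import pathlib

BANK = pathlib.Path.home() / "clawd" / "memory" / "bank"
LIMIT = 5 * 1024  # 5KB guideline from the tip above

def oversized(bank_dir, limit=LIMIT):
    """Return (name, bytes) pairs for bank files larger than the limit."""
    return [(p.name, p.stat().st_size)
            for p in sorted(pathlib.Path(bank_dir).glob("*.md"))
            if p.stat().st_size > limit]

for name, size in oversized(BANK):
    print(f"⚠️  {name}: {size} bytes (over {LIMIT})")
```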

---

## Step 3: Install the config-guard Hook (Layer 1)

Create `~/clawd/hooks/config-guard.js`:

```javascript
// config-guard.js — auto-repairs critical config on gateway startup
// Runs as a pure JS hook, no AI needed — first line of defense

const fs = require('fs');
const path = require('path');

const CONFIG_PATH = path.join(process.env.HOME, '.clawdbot', 'openclaw.json');

const REQUIRED = {
  'memorySearch.provider': 'local',
  'compaction.memoryFlush.enabled': true,
  'compaction.memoryFlush.softThresholdTokens': 10000,
  'gateway.bind': 'tailnet',
};

function getNestedValue(obj, keyPath) {
  return keyPath.split('.').reduce((o, k) => o && o[k], obj);
}

function setNestedValue(obj, keyPath, value) {
  const keys = keyPath.split('.');
  const last = keys.pop();
  const target = keys.reduce((o, k) => {
    if (typeof o[k] !== 'object' || o[k] === null) o[k] = {};
    return o[k];
  }, obj);
  target[last] = value;
}

try {
  const config = JSON.parse(fs.readFileSync(CONFIG_PATH, 'utf8'));
  let patched = false;

  for (const [key, expected] of Object.entries(REQUIRED)) {
    const actual = getNestedValue(config, key);
    if (actual !== expected) {
      console.log(`[config-guard] Repairing ${key}: ${actual} → ${expected}`);
      setNestedValue(config, key, expected);
      patched = true;
    }
  }

  if (patched) {
    fs.writeFileSync(CONFIG_PATH, JSON.stringify(config, null, 2));
    console.log('[config-guard] Config repaired.');
  } else {
    console.log('[config-guard] All values correct.');
  }
} catch (e) {
  console.error('[config-guard] Error:', e.message);
}
```

Register it in your gateway hooks configuration so it runs on every startup.
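The exact registration key depends on your gateway version. As an illustrative sketch only — assuming hooks are declared in `openclaw.json` under a `hooks.startup` array (verify the real key name in your gateway's documentation):

```json
{
  "hooks": {
    "startup": ["~/clawd/hooks/config-guard.js"]
  }
}
```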

---

## Step 4: Create verify-config.sh (Layer 2)

```bash
#!/bin/bash
# verify-config.sh — 8 checks: 4 config values, LaunchAgent health, cron jobs
CONFIG="$HOME/.clawdbot/openclaw.json"
PLIST="$HOME/Library/LaunchAgents/ai.openclaw.gateway.plist"
PASS=0; FAIL=0

check() {
  local key="$1" expected="$2"
  actual=$(python3 -c "
import json
c = json.load(open('$CONFIG'))
keys = '$key'.split('.')
v = c
for k in keys:
    v = v.get(k, {}) if isinstance(v, dict) else {}
print(v)
  " 2>/dev/null)
  if [ "$actual" = "$expected" ]; then
    echo "  ✅ $key = $actual"
    ((PASS++))
  else
    echo "  ❌ $key = $actual (expected: $expected)"
    ((FAIL++))
  fi
}

echo "🔍 Config Validation"
echo ""
echo "Core Settings:"
check "memorySearch.provider" "local"
check "compaction.memoryFlush.enabled" "True"   # Python prints JSON true as "True"
check "compaction.memoryFlush.softThresholdTokens" "10000"
check "gateway.bind" "tailnet"

echo ""
echo "LaunchAgent:"
if [ -f "$PLIST" ]; then
  if grep -q "GROQ_API_KEY" "$PLIST" 2>/dev/null; then
    echo "  ✅ GROQ_API_KEY in LaunchAgent"
    ((PASS++))
  else
    echo "  ❌ GROQ_API_KEY missing from LaunchAgent"
    ((FAIL++))
  fi
  if grep -q "KeepAlive" "$PLIST" 2>/dev/null; then
    echo "  ✅ KeepAlive enabled"
    ((PASS++))
  else
    echo "  ❌ KeepAlive not set"
    ((FAIL++))
  fi
else
  echo "  ⚠️  LaunchAgent plist not found (skip if not on macOS)"
fi

echo ""
echo "Cron Jobs:"
if crontab -l 2>/dev/null | grep -q "auto-unlock"; then
  echo "  ✅ auto-unlock.sh cron installed"
  ((PASS++))
else
  echo "  ❌ auto-unlock.sh cron missing"
  ((FAIL++))
fi
if crontab -l 2>/dev/null | grep -q "cleanup-sessions"; then
  echo "  ✅ cleanup-sessions.sh cron installed"
  ((PASS++))
else
  echo "  ❌ cleanup-sessions.sh cron missing"
  ((FAIL++))
fi

echo ""
echo "Results: $PASS passed, $FAIL failed"
[ "$FAIL" -eq 0 ] && echo "✅ All checks passed" || echo "❌ Fix failures above"
```

---

## Step 5: Set Up auto-unlock.sh (Layer 3a)

```bash
#!/bin/bash
# auto-unlock.sh — clears stale session locks older than 5 minutes
find "$HOME/.clawdbot/sessions" -name "*.lock" -mmin +5 -delete 2>/dev/null
```

Add to crontab:
```bash
chmod +x ~/clawd/scripts/auto-unlock.sh
crontab -e
# Add (runs every 5 minutes): */5 * * * * ~/clawd/scripts/auto-unlock.sh
```

---

## Step 6: Set Up cleanup-sessions.sh (Layer 3b)

```bash
#!/bin/bash
# cleanup-sessions.sh — prevents sessions.json bloat
# We learned this the hard way: 55MB from 1,490 accumulated sessions
SESSIONS="$HOME/.clawdbot/sessions.json"

if [ ! -f "$SESSIONS" ]; then exit 0; fi

SIZE=$(stat -f%z "$SESSIONS" 2>/dev/null || stat --format=%s "$SESSIONS" 2>/dev/null)
if [ "${SIZE:-0}" -gt 1048576 ]; then
  echo "sessions.json is $(($SIZE/1048576))MB — cleaning..."
  python3 -c "
import json, time
with open('$SESSIONS') as f:
    data = json.load(f)
cutoff = time.time() - (7 * 86400)
if isinstance(data, list):
    data = [s for s in data if s.get('updatedAt', 0) > cutoff]
with open('$SESSIONS', 'w') as f:
    json.dump(data, f)
print(f'Cleaned to {len(data)} sessions')
  "
fi
```

Add to crontab:
```bash
chmod +x ~/clawd/scripts/cleanup-sessions.sh
crontab -e
# Add (runs Sundays at 4am): 0 4 * * 0 ~/clawd/scripts/cleanup-sessions.sh
```

---

## Step 7: Create BOOT.md (AI Protocol)

Place in your clawd root (`~/clawd/BOOT.md`):

```markdown
# BOOT.md — Lean Startup Checklist

1. Run verify-config.sh — fix any failures before proceeding
2. Read memory/active-thread.md — current task + next steps
3. Check context % — stay under 30% after recovery
4. Resume work based on active-thread context
5. Use memory_search ON-DEMAND — do not bulk-load files
6. If active-thread.md is missing/stale, ask what's current
```

Reference this in your agent's AGENTS.md so it follows the protocol after every compaction.

> **8GB machines:** Target 25% context instead of 30% to leave more room for conversation. Limit memory_search to 3-5 results per query.

---

## Step 8: Configure AGENTS.md Recovery Protocol

Add to your AGENTS.md:

```markdown
## Post-Compaction Recovery (MANDATORY — BE LEAN)
1. Read memory/active-thread.md ONLY
2. Resume work immediately
3. Do NOT bulk-read daily logs or bank files
4. Use memory_search ON-DEMAND only
5. Stay under 30% context after recovery
```

---

## Step 9: Verify Everything

```bash
# 1. Full config validation
~/clawd/scripts/verify-config.sh

# 2. Test memory search (in a conversation, ask the agent to search)
# "Search memory for [topic]" → should use memory_search tool

# 3. Verify crons
crontab -l | grep -E "auto-unlock|cleanup-sessions"

# 4. Check memory file structure
ls -la ~/clawd/memory/active-thread.md ~/clawd/memory/bank/

# 5. Check embedding model (gateway logs show loading on first use)
```
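For a quick spot check without running the full script, the four critical values from Step 1 can be pulled in a few lines of Python. Treat this as a sketch, not an official tool; the dotted key paths mirror the config keys above.

```python
# Quick sanity check on the four critical config values from Step 1.
# Sketch only -- a lighter alternative to verify-config.sh.
import json
import os

CRITICAL = (
    "memorySearch.provider",
    "compaction.memoryFlush.enabled",
    "compaction.memoryFlush.softThresholdTokens",
    "gateway.bind",
)

def critical_values(config):
    """Look up each dotted key path, returning None for anything missing."""
    def get(path):
        node = config
        for key in path.split("."):
            if not isinstance(node, dict):
                return None
            node = node.get(key)
        return node
    return {p: get(p) for p in CRITICAL}

path = os.path.expanduser("~/.clawdbot/openclaw.json")
if os.path.exists(path):
    with open(path) as f:
        print(json.dumps(critical_values(json.load(f)), indent=2))
```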

---

## Hardware Tuning Notes

The architecture is identical regardless of RAM. These are the only differences:

| Factor | 8GB | 16GB+ | 36GB+ |
|--------|-----|-------|-------|
| Bank file size | Under 5KB each | Up to 20KB each | Unlimited |
| Context budget after recovery | 25% | 30% | 30% |
| memory_search results per query | 3-5 | 5-10 | 10+ |
| Embedding model first-load time | 10-30s | 5-10s | <5s |
| Cron defense layers | All 3 | All 3 | All 3 |
| Config-guard values | Identical | Identical | Identical |
| Recovery protocol | Identical | Identical | Identical |

**The defense stack is the same everywhere.** More RAM just means more headroom for memory files and search results.

---

## Troubleshooting

| Problem | Fix |
|---------|-----|
| `SQLITE_ERROR` on startup | Remove old memory-engine plugin if still installed. memory-core handles everything. |
| Memory search returns nothing | Check `memorySearch.provider` is `"local"`. Restart gateway. |
| Sessions file is huge | Run `cleanup-sessions.sh` manually. Check cron is installed. |
| Agent loads too much on startup | Verify BOOT.md and AGENTS.md have lean recovery protocol. |
| Config keeps reverting | Check config-guard hook is registered and running on startup. |
| Embedding model won't load | Ensure ~600MB disk space. Check gateway logs for download errors. |
| Embedding model slow to load | Normal on 8GB — first load takes 10-30s. Subsequent searches are fast. |
| Compaction loops | Check softThresholdTokens is 10000, NOT a large number. Run config-guard. |
| Gateway crashes on startup | Check sessions.json size. Run cleanup-sessions.sh. Remove old plugin configs. |
| High memory pressure / swap (8GB) | Curate bank files. Kill heavy processes during AI sessions. |

---

*Part of the [Hardened Production Memory](https://autoedu.ai/resources/openclaw-memory-engine) guide by Steve AI 🧠 & VibeX ⚡*
