Add memory folder with daily logs
This commit is contained in:
parent 9556f03ff4
commit 2cb62c1a49

memory/2026-01-14.md (new file, 32 lines)
@@ -0,0 +1,32 @@
# 2026-01-14 - Memory Log

## Context
- Date: 2026-01-14
- User time: America/New_York (EST)

## Events & Conversations

### ~18:25 UTC (23:15 EST) - Discord #general
- User asked what we were talking about previously
- I had no access to previous conversation history (fresh context window)
- User asked about GOG configuration
- **GOG confirmed configured** with 3 accounts:
  - jake@burtonmethod.com (calendar, contacts, docs, drive, gmail, people, sheets, tasks)
  - jake@localbosses.org (calendar, contacts, docs, drive, gmail, people, sheets, tasks)
  - jakeshore98@gmail.com (calendar, contacts, docs, drive, gmail, people, sheets, tasks)
  - All OAuth authenticated on 2026-01-12

### ~18:25 UTC (23:25 EST) - Memory System Issue
- User pointed out memory/ directory and MEMORY.md didn't exist
- Asked why AGENTS.md guidance wasn't being followed
- I acknowledged I'm a fresh session and can't speak to previous instances
- **Action taken**: Created memory/ directory, MEMORY.md, and today's log
- User wants to know what else hasn't been followed

## Questions / Open Items
- What was the previous conversation topic before this session?
- What else should have been done according to AGENTS.md that wasn't?

## Decisions Made
- Memory system now active going forward
- Daily logs will be written to memory/YYYY-MM-DD.md

memory/2026-01-15.md (new file, 308 lines)
@@ -0,0 +1,308 @@
# 2026-01-15 - Memory Log

## Context
- Date: 2026-01-15
- User time: America/New_York (EST)

## Events & Conversations

### ~11:40 UTC (6:40 EST) - Discord #general
- User asked what happened with the video
- Read Discord history — found video edit was from last night (Jan 14)
- **Problem**: Used file size as proxy for brightness instead of actual luminance analysis
- **Fixed**: Redid clip using proper luminance analysis with image tool
- Found transition at ~20.5s, created new clip: `claude_dark_light_transition_v2.mp4`
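
For reference, the transition-finding logic can be sketched as follows. This assumes per-frame mean luminance values (0–255) have already been extracted (e.g. with ffmpeg or OpenCV); the jump threshold is illustrative, not the value actually used:

```python
def find_transition(luminances, fps=30.0, jump=40.0):
    """Return the timestamp (seconds) of the first dark-to-light jump
    in a list of per-frame mean luminance values, or None if absent."""
    for i in range(1, len(luminances)):
        if luminances[i] - luminances[i - 1] >= jump:
            return i / fps
    return None

# Example: 10 dark frames (~20) followed by bright frames (~200)
frames = [20.0] * 10 + [200.0] * 5
print(find_transition(frames))  # transition at frame 10 -> ~0.33 s
```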

### ~11:49 UTC (6:49 EST) - Discord #general
- User asked for fun + productive things we can do today
- Suggested options based on USER.md interests:
  - Veo 3 → Discord integration
  - Mushroom app prototype
  - GHL ↔ CallTools bi-directional sync
  - CRE onboarding flow
  - Das promo script workflow
  - BTC macro catalyst tracker

### ~11:51 UTC (6:51 EST) - Discord #general
- User mentioned "ohhh there is a scraper for reonomy, do you remember that?"
- Searched memory — found no mention of Reonomy scraper
- Found extensive Reonomy scraper files in workspace (v1-v11)
- Read REONOMY-SCRAPER-MEMORY.md — documented v9 owner tab extractor approach
- **Now captured in memory** — will persist going forward

### ~11:53 UTC (6:53 EST) - Discord #general
- User asked me to explain the current steps the Reonomy scraper goes through
- Read `reonomy-scraper-v9-owner-tab.js`
- Documented full step-by-step workflow:
  1. Launch browser (headless/visible)
  2. Log in to Reonomy
  3. Navigate to search
  4. Perform location search
  5. Extract property IDs from search results
  6. For each property: click → wait 8s → extract data → go back
  7. Save JSON results
  8. Cleanup
- Noted: No Google Sheets export in v9, only JSON output
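
A language-neutral sketch of the per-property loop (steps 6–7). The real script is JavaScript/Puppeteer, so this Python version only illustrates the control flow; `extract` stands in for the page-scraping logic:

```python
import json
import time

def scrape(property_ids, extract, wait_s=8):
    """Visit each property, wait the fixed delay, extract, collect (step 6)."""
    results = []
    for pid in property_ids:
        time.sleep(wait_s)  # v9 uses a fixed 8s wait per property
        results.append(extract(pid))
    return results

# wait_s=0 here only so the sketch runs instantly
leads = scrape(["id-1", "id-2"], lambda pid: {"property_id": pid}, wait_s=0)
print(json.dumps(leads))  # step 7 would write this to a JSON file
```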

### ~11:55 UTC (6:55 EST) - Discord #general
- User asked if direct URL construction is better than input search
- **Answer: Yes, objectively better** if Reonomy supports URL parameters
- Explained why: faster, more reliable, more scalable, easier debugging
- Open question: Can Reonomy route search terms directly via URL?
- Suggested investigating URL routing patterns to confirm

### ~12:06 UTC (7:06 EST) - Discord #general
- User found new CLI: **agent-browser** by Vercel Labs
- Cloned repo: `https://github.com/vercel-labs/agent-browser`
- Installed globally via npm: `npm install -g agent-browser`
- Installed Chromium browser binary via `agent-browser install`
- Tested basic commands:
  - `agent-browser open example.com` — opened and loaded page
  - `agent-browser snapshot -i` — returned interactive elements with refs
  - `agent-browser snapshot` — full accessibility tree
  - `agent-browser close` — closed browser
- Reviewed documentation: README.md, SKILL.md, AGENTS.md, package.json

## Tool Learned: agent-browser

### What It Is
- **Headless browser automation CLI for AI agents**
- Fast Rust CLI with Node.js daemon (client-daemon architecture)
- Uses Playwright as browser engine (Chromium by default, supports Firefox/WebKit)
- Designed specifically for AI agents to interact with web pages

### Key Features

**1. Ref-Based Navigation (AI-Friendly)**
```bash
agent-browser snapshot -i   # Returns elements with deterministic refs: @e1, @e2, @e3
agent-browser click @e1     # Use refs to interact (no DOM re-query needed)
```
- **Why refs**: Deterministic, fast, optimal for LLMs

**2. Semantic Locators**
```bash
agent-browser find role button click --name "Submit"
agent-browser find text "Sign In" click
agent-browser find label "Email" fill "test@test.com"
```
- By ARIA role, text content, label, placeholder, alt text, data-testid

**3. Complete Browser Automation**
- Navigation: open, back, forward, reload
- Interactions: click, dblclick, type, fill, hover, drag, upload
- Form handling: check/uncheck, select dropdowns
- Screenshots: full page or viewport
- PDF export
- JavaScript evaluation

**4. Advanced Features**
- **Sessions**: Parallel isolated browser instances
- **State save/load**: Save auth state for reuse (skip login flows)
- **Network routing**: Intercept/mock requests, block URLs
- **Headers support**: HTTP Basic auth, bearer tokens (scoped by origin)
- **Storage management**: Cookies, localStorage, sessionStorage
- **Debugging**: Traces, console logs, error logs
- **CDP mode**: Connect to existing Chrome/Electron apps
- **Streaming**: WebSocket-based browser preview for "pair browsing"

**5. JSON Output**
```bash
agent-browser snapshot -i --json   # Machine-readable for agents
agent-browser get text @e1 --json
```

**6. Platform Support**
- Native Rust binaries for: macOS (ARM64/x64), Linux (ARM64/x64), Windows (x64)
- Falls back to Node.js if native binary unavailable

### Installation
```bash
npm install -g agent-browser
agent-browser install   # Downloads Chromium
```

### Core Workflow
```bash
agent-browser open <url>        # 1. Navigate
agent-browser snapshot -i       # 2. Get interactive elements + refs
agent-browser click @e1         # 3. Interact using refs
agent-browser fill @e2 "text"   #    (or fill a field)
agent-browser snapshot -i       # 4. Re-snapshot after page changes
agent-browser close             # 5. Done
```
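
For scripting, the same workflow can be driven from Python via `subprocess`. A minimal sketch using only the subcommands shown above; the helper names are ours, not part of the tool:

```python
import subprocess

def ab_cmd(*args):
    """Build an agent-browser argv list."""
    return ["agent-browser", *args]

def ab(*args):
    """Run a subcommand and return its stdout (assumes agent-browser is on PATH)."""
    return subprocess.run(ab_cmd(*args), capture_output=True, text=True).stdout

# The five steps above, expressed through the wrapper:
# ab("open", "https://example.com")
# ab("snapshot", "-i")   # returns refs like @e1
# ab("click", "@e1")
# ab("snapshot", "-i")   # re-snapshot after the page changes
# ab("close")
print(ab_cmd("snapshot", "-i"))
```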

### Comparison to Puppeteer (Current Reonomy Scraper)

| Aspect | Puppeteer | agent-browser |
|--------|-----------|---------------|
| **Speed** | Slower (pure JS) | Faster (Rust CLI + Playwright) |
| **Stability** | Common timeout issues | More robust (Playwright engine) |
| **Refs** | No (manual selectors) | Yes (deterministic @e1, @e2) |
| **Semantic locators** | No | Yes (role, text, label) |
| **Sessions** | Single instance | Parallel isolated sessions |
| **State persistence** | Manual | Built-in (state save/load) |
| **Network interception** | Limited | Full routing/mocking |
| **AI integration** | Manual | Designed for LLMs |
| **CLI speed** | Node startup | Rust CLI (fast) |

### Why agent-browser Is Better for Reonomy Scraper

1. **Ref-based navigation** — Snapshot once, use refs for all interactions (faster, less brittle)
2. **Semantic locators** — Find elements by role/text/label instead of fragile CSS selectors
3. **State persistence** — Save login state once, reuse across scrapes (no repeated auth)
4. **Sessions** — Run multiple scrapers in parallel (different locations, same time)
5. **Faster daemon** — Rust CLI stays running, commands execute instantly
6. **Better wait handling** — `wait --text`, `wait --url`, `wait --load networkidle`

### ~12:19 UTC (7:19 EST) - Discord #general
- User wants research on direct URL construction for advanced search with phone + email filters
- User asked: "Let me know if there is anything else I can do for you to help before you do your investigation"

### ~12:28 UTC (7:28 EST) - Discord #general
- User asked "how's it going?"
- Started investigation into Reonomy URL construction

**Reonomy URL Research Started**:
- Opened Reonomy in browser: https://app.reonomy.com/#!/login
- Navigated to search page: https://app.reonomy.com/!/home#!/search
- Attempted to inspect URL patterns and filter mechanisms
- Reviewed Help Center for search documentation

**Key Finding from Help Center**:
From article `3688399-can-i-search-by-type-of-ownership-information`:
> "The Ownership tab in our search filters allows you to search by Owner Contact Information that Includes Phone Number, Includes Email Address or Includes Mailing Address."

**Confirmed Filters**:
- ✅ "Includes Phone Number" - Filter for properties with phone contacts
- ✅ "Includes Email Address" - Filter for properties with email contacts
- ✅ "Includes Mailing Address" - Filter for properties with mailing address

**Known URL Patterns** (from previous research):
```
https://app.reonomy.com/#!/search/{search-id}                           # Search page
https://app.reonomy.com/#!/property/{property-id}                       # Property page
https://app.reonomy.com/#!/search/{search-id}/property/{id}/ownership   # Ownership page (with contact info)
```

**Open Questions**:
- ❓ Can search parameters be passed directly in URL? (e.g., `/#!/search?q=eatontown+nj`)
- ❓ Can filters be encoded in URL? (e.g., `?phone=true&email=true`)
- ❓ Do filters generate shareable URLs?
- ❓ Does Reonomy use query strings or hash-based routing only?

**Research Documented**: `/Users/jakeshore/.clawdbot/workspace/reonomy-url-research.md`

## Questions / Open Items
- Should we migrate Reonomy scraper from Puppeteer to agent-browser?
- Does Reonomy support URL-based search parameters (to skip input typing)?
- **NEW**: What is the exact URL pattern for filtered search with phone + email?

## Decisions Made
- agent-browser installed and tested
- Reonomy scraper v9 workflow documented
- Video clip redone with proper luminance analysis
- Reonomy URL research initiated - help center confirms filters exist, but URL pattern unknown

### ~12:46 UTC (7:46 EST) - Discord #general
- User provided exact URLs and CSS selector for phone numbers!

**What User Provided**:
- **Search URL (with phone+email filters)**:
  ```
  https://app.reonomy.com/#!/search/504a2d13-d88f-4213-9ac6-a7c8bc7c20c6
  ```
  The search ID (`504a2d13-d88f-4213-9ac6-a7c8bc7c20c6`) encodes the applied phone + email filters.

- **Property Ownership URLs** (examples):
  ```
  https://app.reonomy.com/#!/search/504a2d13-d88f-4213-9ac6-a7c8bc7c20c6/property/2b370b6a-7461-5b2c-83be-a59b84788125/ownership
  https://app.reonomy.com/#!/search/504a2d13-d88f-4213-9ac6-a7c8bc7c20c6/property/eac231fb-2e3c-4fe9-8231-fb2e3cafe9c9/ownership
  https://app.reonomy.com/#!/search/504a2d13-d88f-4213-9ac6-a7c8bc7c20c6/property/b6222331-c1e5-4e4c-a223-31c1e59e4c0b/ownership
  https://app.reonomy.com/#!/search/504a2d13-d88f-4213-9ac6-a7c8bc7c20c6/property/988d9810-6cf5-5fda-9af3-7715de381fb2/ownership
  ```

- **Phone number CSS selector**:
  ```css
  p.MuiTypography-root.jss1797.jss1798.MuiTypography-body2
  ```
  (Same class for residential properties)
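
One caution worth noting: the `jss1797`/`jss1798` classes are generated by JSS at build time and can change between deploys, so matching on the stable `MuiTypography-*` classes is likely safer. A stdlib-only sketch of that extraction (the sample markup is invented for illustration):

```python
from html.parser import HTMLParser

class PhoneGrabber(HTMLParser):
    """Collect text from <p> tags whose class list contains the stable MUI classes."""
    WANTED = {"MuiTypography-root", "MuiTypography-body2"}

    def __init__(self):
        super().__init__()
        self.in_target = False
        self.phones = []

    def handle_starttag(self, tag, attrs):
        classes = set(dict(attrs).get("class", "").split())
        if tag == "p" and self.WANTED <= classes:
            self.in_target = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_target = False

    def handle_data(self, data):
        if self.in_target and data.strip():
            self.phones.append(data.strip())

g = PhoneGrabber()
g.feed('<p class="MuiTypography-root jss1797 jss1798 MuiTypography-body2">(555) 123-4567</p>')
print(g.phones)  # ['(555) 123-4567']
```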

- **Goal**: Collect data from BOTH "Builder and Lot" AND "Owner" tabs

**Investigation Completed** via agent-browser:
- ✅ Successfully logged in to Reonomy
- ✅ Confirmed OAuth redirect works with encoded redirect_uri
- ✅ Confirmed direct ownership URL access works (bypasses need for clicking property cards)
- ✅ Search results confirmed to display property cards

**Key Findings**:
- ✅ **No URL parameters needed** — the search ID from a filtered search encodes the phone + email filters
- ✅ **One-time capture** — perform filtered search once, capture search ID, reuse for all properties
- ✅ **Direct ownership URLs work** — `/search/{id}/property/{id}/ownership` pattern confirmed

**How to use**:
1. Perform search with filters manually (one time)
2. Capture search ID from URL
3. Use that search ID for all subsequent property ownership URLs
4. No need to construct search URLs from scratch — just append property IDs to the base search ID path

**Full documentation**: `/Users/jakeshore/.clawdbot/workspace/reonomy-url-research-findings.md`
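
The reuse pattern above amounts to simple string concatenation; a sketch (the property ID below is one of the user-provided examples):

```python
BASE = "https://app.reonomy.com/#!"

def ownership_url(search_id, property_id):
    """Direct ownership-page URL, reusing a previously captured search ID."""
    return f"{BASE}/search/{search_id}/property/{property_id}/ownership"

print(ownership_url(
    "504a2d13-d88f-4213-9ac6-a7c8bc7c20c6",
    "2b370b6a-7461-5b2c-83be-a59b84788125",
))
```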

## Questions / Open Items
- Should we migrate Reonomy scraper from Puppeteer to agent-browser?
- Should we update scraper to extract from BOTH "Builder and Lot" AND "Owner" tabs?

## Decisions Made
- agent-browser installed and tested
- Reonomy scraper v9 workflow documented
- Video clip redone with proper luminance analysis
- **Reonomy URL research COMPLETED** — search ID encodes filters, direct URL construction confirmed

### ~18:07 UTC (13:07 EST) - Discord #general
- User said "yes proceed" to creating new Reonomy scraper with agent-browser

**Created: reonomy-scraper-v10-agent-browser.js**

Key improvements over v9:
- ✅ Uses agent-browser instead of Puppeteer (faster, refs, semantic locators)
- ✅ State save/load for auth persistence (skip repeated login)
- ✅ Extracts from BOTH "Builder and Lot" AND "Owner" tabs
- ✅ Uses search ID from URL (direct ownership access, no clicking cards)
- ✅ Uses user-provided phone CSS selector: `p.MuiTypography-root.jss1797.jss1798.MuiTypography-body2`

Workflow:
1. Check for saved auth state (skip login if it exists)
2. Navigate to search using search ID: `https://app.reonomy.com/#!/search/${SEARCH_ID}`
3. Extract property IDs from search results
4. For each property:
   - Navigate directly to ownership page: `/search/${SEARCH_ID}/property/${id}/ownership`
   - Wait 8 seconds for page load
   - Extract Builder and Lot data: address, city, state, zip, SF, property type
   - Extract Owner tab data: owner names, emails, phones (using provided CSS selector)
   - Take screenshot (first 3 properties)
5. Save to JSON: `reonomy-leads-v10-agent-browser.json`
6. Save search ID for reuse: `reonomy-search-id.txt`

Usage:
```bash
# With pre-configured search ID
SEARCH_ID="504a2d13-d88f-4213-9ac6-a7c8bc7c20c6" node reonomy-scraper-v10-agent-browser.js

# Or set as environment variable
REONOMY_SEARCH_ID="your-search-id" node reonomy-scraper-v10-agent-browser.js

# After the first run, the scraper reuses the search ID saved to reonomy-search-id.txt
```

Files:
- `reonomy-scraper-v10-agent-browser.js` — Main scraper script
- `reonomy-leads-v10-agent-browser.json` — Output leads
- `reonomy-scraper-v10.log` — Detailed logs
- `reonomy-auth-state.txt` — Saved auth state
- `reonomy-search-id.txt` — Reusable search ID

## Decisions Made
- Created new Reonomy scraper using agent-browser
- Dual-tab extraction (Builder and Lot + Owner) implemented
- Auth state persistence added
- Direct ownership URL navigation (no property card clicking) implemented

memory/2026-01-19-backup-system.md (new file, 145 lines)
@@ -0,0 +1,145 @@
# Memory Log: Backup System for Computer Reset
Date: 2026-01-19

---

## What Was Done

Created comprehensive backup and restore system to protect all work from computer reset.

---

## Files Created

| File | Purpose |
|------|---------|
| `backup_before_reset.sh` | Creates backup of all critical data |
| `restore_after_reset.sh` | Restores all critical data |
| `RESET-IMPACT-ANALYSIS.md` | Detailed analysis of what's at risk |
| `BACKUP-RESTORE-QUICK-REF.md` | Quick reference guide |

---

## What's Protected

### 🔴 Critical (Lost on Reset - Must Restore)
- **Cron jobs** (6 jobs)
  - Daily pickle motivation to Stevan (5:45 PM)
  - Daily anus fact (9:00 AM)
  - Daily Remix Sniper scan (9:00 AM)
  - Weekly remix stats update (Sundays 10:00 AM)
  - Weekly validation report (Sundays 11:00 AM)
  - Daily text to Jake Smith (9:10 PM)
- **Launchd service** (Remi bot auto-restart)
- **PostgreSQL database** (remix_sniper schema)
- **Tracking data**
  - `predictions.json` (8 predictions)
  - `remixes.json` (1 remix outcome)
  - `snapshots/` (daily chart snapshots)
- **Environment files**
  - `DISCORD_BOT_TOKEN`
  - `DATABASE_URL`

### 🟡 Important (May Survive - But Backed Up)
- Project code (`~/projects/remix-sniper/`)
- Clawdbot workspace (`~/.clawdbot/workspace/`)
- Custom scripts (`pickle_motivation.sh`, `daily-anus-fact.sh`)

---

## Usage

### Before Reset
```bash
# 1. Run backup
~/.clawdbot/workspace/backup_before_reset.sh

# 2. Copy to external storage
rsync -av ~/.clawdbot/workspace/backup-before-reset-* /Volumes/ExternalDrive/
```

### After Reset
```bash
# 1. Copy from external storage
rsync -av /Volumes/ExternalDrive/backup-before-reset-* ~/.clawdbot/workspace/

# 2. Run restore
~/.clawdbot/workspace/restore_after_reset.sh ~/.clawdbot/workspace/backup-before-reset-YYYYMMDD-HHMMSS
```

---

## Backup Contents

```
backup-before-reset-YYYYMMDD-HHMMSS/
├── crontab-backup.txt        # 6 cron jobs
├── launchd/                  # plist files
│   └── com.jakeshore.remix-sniper.plist
├── remix_sniper-db.sql       # Full DB dump
├── remix-sniper/
│   └── tracking/
│       ├── predictions.json  # 8 predictions
│       ├── remixes.json      # 1 remix outcome
│       └── snapshots/        # Daily snapshots
├── env-files/
│   └── .env                  # API tokens, DB URL
├── clawdbot-workspace/       # All workspace files
├── scripts/                  # Custom shell scripts
└── sha256-checksums.txt      # File integrity
```

---

## Verification Commands

```bash
# Check cron jobs (should have 6)
crontab -l

# Check launchd service
launchctl list | grep remix-sniper

# Check database
psql -d remix_sniper -c "\dt"

# Check tracking data
ls -la ~/.remix-sniper/tracking/

# Check bot logs
tail -f ~/projects/remix-sniper/bot.log
```

---

## Next Steps

1. **Run backup** before reset
2. **Copy to external storage**
3. **Verify backup contents**
4. **After reset, run restore script**
5. **Run verification commands**
6. **Test Remi bot in Discord**

---

## Notes

- Backup scripts are executable
- SHA256 checksums verify file integrity
- Restore script verifies checksums before restoring
- PostgreSQL database is currently empty (schema exists, no data)
- All important data is in JSON files (predictions, remixes)

---

## Questions?

If anything goes wrong:
1. Check backup directory exists
2. Verify checksums: `shasum -c sha256-checksums.txt`
3. Manually restore specific items if script fails
4. Tag Buba in Discord
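
The checksum step corresponds to hashing each file; a minimal Python equivalent of one `shasum -a 256` line:

```python
import hashlib

def sha256_file(path):
    """Hex SHA-256 of a file, read in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```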

memory/2026-01-19-cloud-backup.md (new file, 159 lines)
@@ -0,0 +1,159 @@
# Memory Log: Cloud Backup System
Date: 2026-01-19

---

## What Was Done

Created a **comprehensive cloud backup system** to protect all work from computer reset.

---

## Tools Installed

1. **rclone** - Universal cloud storage tool
   - Installed via Homebrew
   - Supports: Google Drive, Dropbox, DigitalOcean Spaces, AWS S3, OneDrive
   - Command: `rclone config` to set up

---

## Scripts Created

| Script | Purpose | Uses |
|--------|---------|------|
| `backup_to_cloud.sh` | Upload all data to cloud storage | rclone |
| `restore_from_cloud.sh` | Download from cloud storage | rclone |
| `backup_to_github.sh` | Backup code to GitHub | git |

---

## Documentation Created

| File | Purpose |
|------|---------|
| `CLOUD-BACKUP-SETUP.md` | How to set up cloud storage |
| `CLOUD-BACKUP-SYSTEM.md` | Complete guide to all backups |
| `memory/2026-01-19-cloud-backup.md` | This file |

---

## Three-Tier Backup System

### Tier 1: Local Backup (Already Created)
- Scripts: `backup_before_reset.sh`, `restore_after_reset.sh`
- Location: `~/.clawdbot/workspace/backup-before-reset-*/`
- Protects against: Computer reset (if user data preserved)
- Risk: Lost if hard drive fails

### Tier 2: Cloud Backup (New)
- Scripts: `backup_to_cloud.sh`, `restore_from_cloud.sh`
- Location: Configured cloud storage (rclone remote)
- Protects against: Computer reset, hard drive failure
- Setup required: Run `rclone config`

### Tier 3: GitHub Backup (New)
- Script: `backup_to_github.sh`
- Location: GitHub repository
- Protects against: Code loss
- Setup required: Create GitHub repo

---

## Recommended Cloud Provider

**DigitalOcean Spaces** (recommended)
- Why: Jake already has a DO account
- Free tier: 250 GB for 10 months
- S3-compatible: Works with rclone
- Fast: Good upload/download speeds

---

## What Gets Protected

### Critical Data (All 3 tiers)
- Cron jobs (6 jobs)
- Launchd services
- PostgreSQL database
- Tracking data (predictions, remixes)
- Environment files (.env)
- Clawdbot workspace
- Custom scripts

### Code Only (Tier 3 - GitHub)
- Remix Sniper source code
- Scrapers, analyzers, models
- Bot commands, cogs

---

## How to Use

### Set Up Cloud Storage (One-Time)
```bash
rclone config
# Follow prompts for DigitalOcean Spaces
#   Name: do-spaces
#   Type: s3 -> DigitalOcean
#   API key: Get from DO dashboard
#   Endpoint: https://nyc3.digitaloceanspaces.com
```

### Run Cloud Backup
```bash
~/.clawdbot/workspace/backup_to_cloud.sh do-spaces
```

### Run GitHub Backup
```bash
cd ~/projects/remix-sniper
~/.clawdbot/workspace/backup_to_github.sh jakeshore remix-sniper
```

---

## Before Reset Checklist

- [ ] Run local backup: `~/.clawdbot/workspace/backup_before_reset.sh`
- [ ] Run cloud backup: `~/.clawdbot/workspace/backup_to_cloud.sh do-spaces`
- [ ] Push to GitHub: `git push`
- [ ] Copy backups to external drive

---

## After Reset Checklist

- [ ] Install tools: `brew install postgresql@16 rclone git`
- [ ] Restore from cloud: `~/.clawdbot/workspace/restore_from_cloud.sh do-spaces ...`
- [ ] Clone from GitHub: `git clone https://github.com/jakeshore/remix-sniper.git ~/projects/remix-sniper`
- [ ] Run verification commands (see CLOUD-BACKUP-SYSTEM.md)

---

## 1Password Status

- CLI version: 2.32.0
- Account found: jakeshore98@gmail.com (FNFLSCM5NNDQ3IMHEX2WZJZAKM)
- Issue: Desktop app needs update for CLI integration
- Workaround: Manual setup of cloud storage (`rclone config`)

---

## Notes

- rclone is installed and ready to use
- All backup scripts are executable
- 1Password CLI can't authenticate due to app version issue
- User will need to manually configure rclone with cloud provider credentials
- Google Places API key found in environment (for future use)

---

## Next Steps for User

1. Update 1Password desktop app
2. Run `rclone config` to set up cloud storage
3. Run first cloud backup
4. Create GitHub repo
5. Test restore from cloud

memory/2026-01-19-remix-sniper-setup.md (new file, 191 lines)
@@ -0,0 +1,191 @@
# Memory Log: Remix Sniper Setup
Date: 2026-01-19

---

## What Was Done Today

### 1. PostgreSQL Installation & Database Setup
- Installed PostgreSQL 16 via Homebrew
- Started postgresql@16 service
- Created `remix_sniper` database
- Initialized database tables (`scripts/scan.py init-db`)
- Added `DATABASE_URL` to `.env`

### 2. Bot Auto-Restart Configuration
- Created launchd plist: `~/Library/LaunchAgents/com.jakeshore.remix-sniper.plist`
- Configured KeepAlive (auto-restart on crash)
- Set up structured logging
- Bot now runs as system service with PID 47883

### 3. Cron Jobs Scheduled
- **Daily scan**: 9am EST - `scripts/daily_scan.py`
- **Weekly stats update**: Sunday 10am - `scripts/update_remix_stats.py`
- **Weekly validation report**: Sunday 11am - `scripts/weekly_report.py`
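
The three schedules above correspond to crontab entries of roughly this shape (the paths are illustrative, and cron evaluates times in the system's local timezone, so "9am EST" holds only if the machine is set to America/New_York):

```
0 9 * * *  cd ~/projects/remix-sniper && venv/bin/python scripts/daily_scan.py
0 10 * * 0 cd ~/projects/remix-sniper && venv/bin/python scripts/update_remix_stats.py
0 11 * * 0 cd ~/projects/remix-sniper && venv/bin/python scripts/weekly_report.py
```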

### 4. New Scripts Created

**`scripts/update_remix_stats.py`**
- Updates remix play counts from external APIs
- Supports SoundCloud and YouTube (API keys needed)
- Tracks milestone plays (day 1, 7, 30, 90)

**`scripts/weekly_report.py`**
- Generates weekly validation report
- Shows summary stats, outcomes, validation metrics
- Identifies pending remixes needing updates
- Provides recommendations

### 5. Documentation Created
- `~/projects/remix-sniper/SETUP_SUMMARY.md` - Complete setup summary
- `~/.clawdbot/workspace/remix-sniper-skill.md` - Quick reference for Buba
- Updated `USER.md` with project notes

### 6. Testing
- Verified bot is running (`launchctl list | grep remix-sniper`)
- Ran weekly report script successfully
- Confirmed database connection works

---

## Current System State

### Bot Status
- ✅ Running (PID: 47883)
- ✅ Auto-restart enabled (launchd KeepAlive)
- ✅ 9 Discord commands loaded
- ✅ Logging to `bot.log`

### Database
- ✅ Postgres 16 running
- ✅ `remix_sniper` database created
- ✅ Tables initialized
- ✅ 8 predictions tracked, 1 remix outcome

### Tracking Data
- Location: `~/.remix-sniper/tracking/`
- `predictions.json`: 8 predictions
- `remixes.json`: 1 remix record (test)
- `snapshots/`: Daily chart snapshots
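
A small helper for sanity-checking those counts; this assumes each tracking file holds a JSON array (the on-disk format hasn't been verified here):

```python
import json
from pathlib import Path

def count_records(path):
    """Number of records in a tracking file, assuming it is a JSON array."""
    return len(json.loads(Path(path).read_text()))

# e.g. count_records(Path.home() / ".remix-sniper/tracking/predictions.json")
# should report 8 given the state above
```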

---

## Next Steps / Outstanding Items

### High Priority
1. **Track more remix outcomes** - Need 10+ for meaningful validation
2. **Add Discord webhook URL** - For daily digests and urgent alerts
3. **Test daily scan** - Verify cron job runs tomorrow at 9am

### Medium Priority
1. **Add API keys** for enhanced features:
   - Spotify API (song metadata, audio features)
   - YouTube Data API v3 (remix stats updates)
   - SoundCloud API v2 (remix stats updates)
   - Chartmetric (professional charts)

2. **Refine scoring model** - After 10+ outcomes:
   - Run backtest on existing data
   - Adjust weights based on factor importance
   - Genre-specific tuning (EDM, pop, hip-hop)

3. **Visualization dashboard** - Consider Streamlit or Next.js for:
   - Top opportunities by chart/genre
   - Historical performance of predictions
   - Remix success rates

### Low Priority
1. **Expand data sources**:
   - Beatport (EDM niche)
   - Discogs (label catalog)
   - Hype Machine (indie/underground)

2. **Automated remix creation**:
   - Ableton project template generation
   - Key/BPM info pre-populated
   - Sample pack suggestions

---

## Commands Reference

### Bot Management
```bash
# Check status
launchctl list | grep remix-sniper

# Restart (launchctl has no "restart" subcommand; kickstart -k does the job)
launchctl kickstart -k gui/$(id -u)/com.jakeshore.remix-sniper

# View logs
tail -f ~/projects/remix-sniper/bot.log
```

### Manual Operations
```bash
cd ~/projects/remix-sniper
source venv/bin/activate

# Run manual scan
python scripts/scan.py scan --chart all --limit 20

# Run daily scan with alerts
python scripts/daily_scan.py

# Update remix stats
python scripts/update_remix_stats.py

# Generate weekly report
python scripts/weekly_report.py
```

### Database
```bash
# Connect to database
/opt/homebrew/opt/postgresql@16/bin/psql -d remix_sniper

# Restart Postgres
brew services restart postgresql@16
```

---

## Key Files

| File | Purpose |
|------|---------|
| `~/projects/remix-sniper/SETUP_SUMMARY.md` | Complete setup documentation |
| `~/.clawdbot/workspace/remix-sniper-skill.md` | Quick reference for Buba |
| `~/projects/remix-sniper/.env` | Environment variables (API keys) |
| `~/Library/LaunchAgents/com.jakeshore.remix-sniper.plist` | Bot auto-restart config |

---

## Validation Goal

**Track 10+ remix outcomes** for meaningful validation metrics.

Current state: 1/10 outcomes tracked (10%)

After each remix:
```bash
cd ~/projects/remix-sniper
source venv/bin/activate
python -c "
from packages.core.tracking.tracker import DatasetTracker
from packages.core.database.models import RemixOutcome

tracker = DatasetTracker()
# Update stats for your remix (replace remix_id and plays with real values)
tracker.update_remix_stats(remix_id, plays=50000, outcome=RemixOutcome.SUCCESS)
"
```

---

## Notes

- Bot is named "Remi" in Discord (ID: 1462921957392646330)
- Shazam charts are currently the primary data source (Tier 1)
- Spotify API credentials are optional but would enable audio features
- TikTok is the #1 predictor (30% weight) - high importance