MCP Pipeline + workspace sync — 2026-02-06
=== WHAT'S BEEN DONE (Recent) ===

MCP Pipeline Factory:
- 38 MCP servers tracked across 7 pipeline stages
- 31 servers at Stage 16 (Website Built) — ready to deploy
- All 30 production servers patched to 100/100 protocol compliance
- Built complete testing infra: mcp-jest, mcp-validator, mcp-add, MCP Inspector
- 702 auto-generated test cases ready for live API testing
- Autonomous pipeline operator system w/ 7 Discord channels + cron jobs
- Dashboard live at 192.168.0.25:8888 (drag-drop kanban)

CloseBot MCP:
- 119 tools, 4,656 lines TypeScript, compiles clean
- 14 modules (8 tool groups + 6 UI apps)

GHL MCP:
- Stage 11 (Edge Case Testing) — 42 failing tests identified

Sub-agent _meta Labels:
- All 643 tools across 5 MCPs tagged (GHL, Google Ads, Meta Ads, Google Console, Twilio)

OpenClaw Upwork Launch:
- 15 graphics, 6 mockups, 2 PDFs, 90-sec Remotion video
- 3-tier pricing: $2,499 / $7,499 / $24,999
- First $20k deal closed + $2k/mo retainer (hospice)

Other:
- Surya Blender animation scripts (7 tracks)
- Clawdbot architecture deep dive doc
- Pipeline state.json updates

=== TO-DO (Open Items) ===

BLOCKERS:
- [ ] GHL MCP: Fix 42 failing edge case tests (Stage 11)
- [ ] Expired Anthropic API key in localbosses-app .env.local
- [ ] Testing strategy decision: structural vs live API vs hybrid

NEEDS API KEYS (can't progress without):
- [ ] Meta Ads MCP — needs META_ADS_API_KEY for Stage 8→9
- [ ] Twilio MCP — needs TWILIO_API_KEY for Stage 8→9
- [ ] CloseBot MCP — needs CLOSEBOT_API_KEY for live testing
- [ ] 702 test cases across all servers need live API credentials

PIPELINE ADVANCEMENT:
- [ ] Stage 7→8: CloseBot + Google Console need design approval
- [ ] Stage 6→7: 22 servers need UI apps built
- [ ] Stage 5→6: 5 servers need core tools built (FreshBooks, Gusto, Jobber, Keap, Lightspeed)
- [ ] Stage 1→5: 3 new MCPs need scaffolding (Compliance GRC, HR People Ops, Product Analytics)

PENDING REVIEW:
- [ ] Jake review OpenClaw video + gallery → finalize Upwork listing
- [ ] LocalBosses UI redesign (Steve Jobs critique delivered, recs available)

QUEUED PROJECTS:
- [ ] SongSense AI music analysis product (architecture done, build not started)
- [ ] 8-Week Agent Study Plan execution (curriculum posted, Week 1 not started)
This commit is contained in:
parent
0f4e71179d
commit
d5e86e050b
659
clawdbot-architecture-deep-dive.md
Normal file
# 🦞 Clawdbot Architecture Deep Dive

> A comprehensive technical breakdown of Clawdbot's codebase, prompting system, and internal architecture.

---

## High-Level Overview

Clawdbot is a **TypeScript/Node.js application** (v22.12+) that acts as a universal gateway between messaging platforms and AI agents. Think of it as a sophisticated message router with an embedded AI brain.

```
┌─────────────────────────────────────────────────────────────────┐
│                       MESSAGING CHANNELS                        │
│    Discord │ Telegram │ WhatsApp │ Signal │ iMessage │ Slack    │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                         GATEWAY SERVER                          │
│  - WebSocket control plane (ws://127.0.0.1:18789)               │
│  - HTTP server (control UI, Canvas, OpenAI-compat endpoints)    │
│  - Session management                                           │
│  - Cron scheduler                                               │
│  - Node pairing (iOS/Android/macOS)                             │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                          AGENT LAYER                            │
│  - Pi coding agent (embedded via @mariozechner packages)        │
│  - System prompt generation                                     │
│  - Tool definitions & policy enforcement                        │
│  - Sub-agent spawning                                           │
│  - Model routing (Anthropic, OpenAI, Gemini, Bedrock, etc.)     │
└─────────────────────────────────────────────────────────────────┘
```

---
## Directory Structure

```
/opt/homebrew/lib/node_modules/clawdbot/
├── dist/            # Compiled JavaScript (~800 files)
│   ├── agents/      # Agent runtime, tools, system prompt
│   ├── gateway/     # Gateway server implementation
│   ├── channels/    # Channel plugin system
│   ├── cli/         # CLI commands
│   ├── config/      # Configuration loading/validation
│   ├── browser/     # Playwright browser automation
│   ├── cron/        # Scheduled jobs
│   ├── memory/      # Semantic memory search
│   ├── sessions/    # Session management
│   ├── plugins/     # Plugin SDK
│   ├── infra/       # Infrastructure utilities
│   └── ...
├── docs/            # Documentation (~50 files)
├── skills/          # Built-in skills (~50 SKILL.md packages)
├── extensions/      # Channel extensions
├── assets/          # Static assets
└── package.json     # Dependencies & scripts
```

---
## Core Components

### 1. Entry Point (`dist/entry.js`)

The CLI entry point that:
- Sets `process.title = "clawdbot"`
- Suppresses Node.js experimental warnings
- Handles Windows path normalization
- Loads CLI profiles
- Dispatches to `cli/run-main.js`

```javascript
#!/usr/bin/env node
process.title = "clawdbot";
installProcessWarningFilter();

// Handle profile args, then bootstrap CLI
import("./cli/run-main.js")
  .then(({ runCli }) => runCli(process.argv));
```
### 2. Gateway Server (`dist/gateway/server.impl.js`)

The heart of Clawdbot — a single long-running process that owns:

| Subsystem | Purpose |
|-----------|---------|
| Config loader | Reads/validates `~/.clawdbot/clawdbot.yaml` |
| Plugin registry | Loads channel & tool plugins |
| Channel manager | Manages Discord/Telegram/WhatsApp connections |
| Session manager | Isolates conversations, tracks history |
| Cron service | Scheduled jobs & reminders |
| Node registry | Mobile/desktop node pairing |
| TLS runtime | Secure connections |
| Control UI | Browser dashboard at `:18789` |
| Health monitor | Gateway health & presence |

**Key Gateway Files:**
- `server-channels.js` — Channel connection lifecycle
- `server-chat.js` — Message → Agent routing
- `server-cron.js` — Scheduled jobs & reminders
- `server-bridge-*.js` — WebSocket control plane methods
- `server-http.js` — HTTP endpoints
- `server-providers.js` — Model provider management
### 3. Agent System (`dist/agents/`)

This is where the AI "brain" lives.

#### System Prompt Generation (`system-prompt.js`)

The `buildAgentSystemPrompt()` function dynamically constructs the system prompt based on runtime context:

```typescript
export function buildAgentSystemPrompt(params) {
  // Sections built dynamically:
  const lines = [
    "You are a personal assistant running inside Clawdbot.",
    "",
    "## Tooling",
    // ... tool availability list
    "",
    "## Tool Call Style",
    // ... narration guidelines
    "",
    "## Clawdbot CLI Quick Reference",
    // ... CLI commands
    "",
    ...buildSkillsSection(params),    // Available skills
    ...buildMemorySection(params),    // Memory recall instructions
    ...buildDocsSection(params),      // Documentation paths
    ...buildMessagingSection(params), // Messaging guidelines
    ...buildReplyTagsSection(params), // [[reply_to_current]] etc.
    // ...
    "## Runtime",
    buildRuntimeLine(runtimeInfo),    // Model, channel, capabilities
  ];

  // Inject project context files
  for (const file of contextFiles) {
    lines.push(`## ${file.path}`, "", file.content, "");
  }

  return lines.filter(Boolean).join("\n");
}
```
**System Prompt Sections:**

| Section | Purpose |
|---------|---------|
| Tooling | Lists available tools with descriptions |
| Tool Call Style | When to narrate vs. just call tools |
| CLI Quick Reference | Gateway management commands |
| Skills | Available SKILL.md files to read |
| Memory Recall | How to use memory_search/memory_get |
| Self-Update | Config/update restrictions |
| Model Aliases | opus, sonnet shortcuts |
| Workspace | Working directory info |
| Documentation | Docs paths |
| Reply Tags | Native reply/quote syntax |
| Messaging | Channel routing rules |
| Silent Replies | NO_REPLY handling |
| Heartbeats | HEARTBEAT_OK protocol |
| Runtime | Model, channel, capabilities |
| Project Context | AGENTS.md, SOUL.md, USER.md, etc. |

The final prompt is **~2000-3000 tokens** depending on configuration.
#### Tools System (`dist/agents/tools/`)

Each tool is a separate module with schema definition and handler:

| Tool | File | Purpose |
|------|------|---------|
| `exec` | `bash-tools.exec.js` | Shell command execution (54KB!) |
| `process` | `bash-tools.process.js` | Background process management |
| `browser` | `browser-tool.js` | Playwright browser control |
| `canvas` | `canvas-tool.js` | Present/eval/snapshot Canvas |
| `cron` | `cron-tool.js` | Scheduled jobs & reminders |
| `gateway` | `gateway-tool.js` | Self-management (restart, update) |
| `message` | `message-tool.js` | Cross-channel messaging |
| `nodes` | `nodes-tool.js` | Mobile node camera/screen/location |
| `sessions_list` | `sessions-list-tool.js` | List sessions |
| `sessions_history` | `sessions-history-tool.js` | Fetch session history |
| `sessions_send` | `sessions-send-tool.js` | Send to another session |
| `sessions_spawn` | `sessions-spawn-tool.js` | Spawn sub-agent |
| `session_status` | `session-status-tool.js` | Usage/cost/model info |
| `agents_list` | `agents-list-tool.js` | List spawnable agents |
| `web_search` | `web-search.js` | Brave API search |
| `web_fetch` | `web-fetch.js` | URL content extraction |
| `image` | `image-tool.js` | Vision model analysis |
| `memory_search` | `memory-tool.js` | Semantic memory search |
| `memory_get` | `memory-tool.js` | Read memory snippets |
| `tts` | `tts-tool.js` | Text-to-speech |
**Tool Policy Enforcement (`pi-tools.policy.js`):**

Tools are filtered through multiple policy layers:
1. Global policy (`tools.policy` in config)
2. Provider-specific policy
3. Agent-specific policy
4. Group chat policy
5. Sandbox policy
6. Sub-agent policy

```typescript
const isAllowed = isToolAllowedByPolicies("exec", [
  profilePolicy,
  providerProfilePolicy,
  globalPolicy,
  globalProviderPolicy,
  agentPolicy,
  agentProviderPolicy,
  groupPolicy,
  sandbox?.tools,
  subagentPolicy,
]);
```
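The layering above can be sketched as a small resolver. This is illustrative only — `isToolAllowedByPolicies` is a real function in `pi-tools.policy.js`, but the policy shape and exact semantics below are assumptions:

```typescript
// Hypothetical policy shape: each layer may allow, deny, or abstain (undefined).
type ToolPolicy = { allow?: string[]; deny?: string[] } | undefined;

// Illustrative resolver: a tool passes only if no layer denies it and every
// layer that declares an allowlist includes it; undefined layers abstain.
function isToolAllowedByPolicies(tool: string, layers: ToolPolicy[]): boolean {
  for (const layer of layers) {
    if (!layer) continue; // layer abstains
    if (layer.deny?.includes(tool)) return false; // any deny wins
    if (layer.allow && !layer.allow.includes(tool)) return false; // outside an allowlist
  }
  return true;
}
```

Under this reading, a sandbox-layer `deny` overrides an agent-layer `allow`, which matches the "security through multiple layers" intent.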
#### Pi Integration (`dist/agents/pi-*.js`)

Clawdbot embeds [Pi coding agent](https://github.com/badlogic/pi-mono) as its core AI runtime:

```jsonc
// Dependencies from package.json:
{
  "@mariozechner/pi-agent-core": "0.49.3",
  "@mariozechner/pi-ai": "0.49.3",
  "@mariozechner/pi-coding-agent": "0.49.3",
  "@mariozechner/pi-tui": "0.49.3"
}
```

**Key Pi Integration Files:**

| File | Purpose |
|------|---------|
| `pi-embedded-runner.js` | Spawns Pi agent sessions |
| `pi-embedded-subscribe.js` | Handles streaming responses |
| `pi-embedded-subscribe.handlers.*.js` | Message/tool event handlers |
| `pi-embedded-utils.js` | Utilities for Pi integration |
| `pi-tools.js` | Tool definition adapter |
| `pi-tools.policy.js` | Tool allowlist/denylist |
| `pi-tools.read.js` | Read tool customization |
| `pi-tools.schema.js` | Schema normalization |
| `pi-settings.js` | Pi agent settings |
### 4. Channel Plugins (`dist/channels/plugins/`)

Each messaging platform is a plugin:

```
channels/plugins/
├── discord/       # Discord.js integration
├── telegram/      # grammY framework
├── whatsapp/      # Baileys (WhatsApp Web protocol)
├── signal/        # Signal CLI bridge
├── imessage/      # macOS imsg CLI
├── bluebubbles/   # BlueBubbles API
├── slack/         # Slack Bolt
├── line/          # LINE Bot SDK
├── mattermost/    # WebSocket events
├── googlechat/    # Google Chat API
└── ...
```

**Channel Plugin Interface:**

Each plugin implements:
- `connect()` — Establish connection
- `disconnect()` — Clean shutdown
- `send()` — Deliver messages
- `onMessage()` — Handle incoming messages
- Channel-specific actions (reactions, polls, threads, etc.)

**Channel Registry (`dist/channels/registry.js`):**

```typescript
// Plugins register themselves
registerChannelPlugin({
  id: "discord",
  displayName: "Discord",
  connect: async (config) => { ... },
  send: async (message) => { ... },
  // ...
});
```
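A minimal in-memory plugin can illustrate the lifecycle. The interface shape below is an assumption distilled from the bullet list above; the real contract in `dist/channels/` is richer:

```typescript
// Assumed plugin contract (illustrative, not Clawdbot's actual types).
interface ChannelPlugin {
  id: string;
  displayName: string;
  connect(config: Record<string, unknown>): Promise<void>;
  disconnect(): Promise<void>;
  send(chatId: string, text: string): Promise<void>;
  onMessage(handler: (chatId: string, text: string) => void): void;
}

// In-memory "echo" channel for demonstration: inject() simulates an
// inbound platform event, and sent records outbound deliveries.
function createEchoChannel(): ChannelPlugin & {
  inject(chatId: string, text: string): void;
  sent: string[];
} {
  let handler: ((chatId: string, text: string) => void) | undefined;
  const sent: string[] = [];
  return {
    id: "echo",
    displayName: "Echo",
    sent,
    async connect() {},
    async disconnect() {},
    async send(chatId, text) { sent.push(`${chatId}:${text}`); },
    onMessage(h) { handler = h; },
    inject(chatId, text) { handler?.(chatId, text); },
  };
}
```

In the real gateway, the registered handler is what routes the message into session resolution and the agent layer.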
### 5. Skills System (`skills/`)

Skills are self-contained instruction packages that teach the agent how to use external tools:

```
skills/
├── github/SKILL.md          # gh CLI usage
├── gog/SKILL.md             # Google Workspace CLI
├── spotify-player/SKILL.md  # Spotify control
├── weather/SKILL.md         # wttr.in integration
├── bear-notes/SKILL.md      # Bear notes via grizzly
├── apple-notes/SKILL.md     # memo CLI
├── apple-reminders/SKILL.md # remindctl CLI
├── obsidian/SKILL.md        # Obsidian vault management
├── notion/SKILL.md          # Notion API
├── himalaya/SKILL.md        # Email via IMAP/SMTP
├── openhue/SKILL.md         # Philips Hue control
├── camsnap/SKILL.md         # RTSP camera capture
└── ...
```

**Skill Loading Flow:**
1. System prompt includes skill descriptions in `<available_skills>`
2. Agent scans descriptions to find matching skill
3. Agent calls `read` tool to load SKILL.md
4. Agent follows instructions in SKILL.md
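Step 1 of the flow can be sketched as follows. Only the `<available_skills>` tag and the `skills/<name>/SKILL.md` layout come from this doc; the rendering format and helper names are assumptions:

```typescript
interface SkillSummary { name: string; description: string }

// Render the <available_skills> block for the system prompt.
// The exact line format is an assumption for illustration.
function renderAvailableSkills(skills: SkillSummary[]): string {
  const lines = skills.map((s) => `- ${s.name}: ${s.description}`);
  return ["<available_skills>", ...lines, "</available_skills>"].join("\n");
}
```

At runtime, the descriptions would be collected by scanning `skills/*/SKILL.md` (e.g. taking the first non-heading line of each file) before rendering the block.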
**Skill File Structure:**

````markdown
# SKILL.md - [Tool Name]

## When to use
Description of when this skill applies.

## Commands
```bash
tool-name command --flags
```

## Examples
...
````
### 6. Memory System (`dist/memory/`)

Semantic search over workspace memory files:

**Memory Files:**
- `MEMORY.md` — Root memory file
- `memory/*.md` — Dated logs, research intel, project notes

**Memory Tools:**
- `memory_search` — Semantic vector search using `sqlite-vec`
- `memory_get` — Read specific lines from memory files

```typescript
// memory-search.js (simplified)
import * as sqliteVec from "sqlite-vec"; // registers vec_* SQL functions via sqliteVec.load(db)

async function searchMemory(query: string, options: SearchOptions) {
  // Embed the query text
  const embedding = await embedText(query);

  // Vector similarity search over indexed memory chunks
  const results = await db.query(`
    SELECT path, line_start, line_end, content,
           vec_distance_cosine(embedding, ?) AS distance
    FROM memory_chunks
    ORDER BY distance
    LIMIT ?
  `, [embedding, options.maxResults]);

  return results;
}
```
### 7. Session Management (`dist/gateway/session-utils.js`)

Sessions isolate conversations and track state:

**Session Key Format:**
```
{channel}:{accountId}:{chatId}

discord:main:938238002528911400
telegram:main:123456789
whatsapp:main:1234567890@s.whatsapp.net
```
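Hypothetical helpers around the documented key format (the function names are illustrative, not Clawdbot's API):

```typescript
interface SessionKey { channel: string; accountId: string; chatId: string }

function buildSessionKey(k: SessionKey): string {
  return `${k.channel}:${k.accountId}:${k.chatId}`;
}

function parseSessionKey(key: string): SessionKey {
  // chatId may itself contain ":" in principle, so only the first two
  // fields are split off; the remainder is kept intact.
  const [channel, accountId, ...rest] = key.split(":");
  if (!channel || !accountId || rest.length === 0) {
    throw new Error(`Malformed session key: ${key}`);
  }
  return { channel, accountId, chatId: rest.join(":") };
}
```

For example, `parseSessionKey("discord:main:938238002528911400")` yields channel `discord`, account `main`, and the Discord chat ID.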
**Session State:**
- Conversation history
- Model override (if any)
- Reasoning level
- Active tool calls
- Sub-agent references

**Session Files:**
```
~/.clawdbot/sessions/
├── discord-main-938238002528911400.json
├── telegram-main-123456789.json
└── ...
```
---
## Message Flow

```
1. User sends message on Discord/Telegram/WhatsApp/etc.
        │
        ▼
2. Channel plugin receives message
   - Parses sender, chat ID, content
   - Handles media attachments
   - Checks mention gating (groups)
        │
        ▼
3. Gateway routes to session
   - Resolves session key
   - Loads/creates session
   - Checks activation rules
        │
        ▼
4. Session loads context:
   - System prompt (generated dynamically)
   - Project context files (AGENTS.md, SOUL.md, USER.md)
   - Conversation history
   - Tool availability
        │
        ▼
5. Pi agent processes with configured model
   - Anthropic (Claude)
   - OpenAI (GPT-4, o1, etc.)
   - Google (Gemini)
   - AWS Bedrock
   - Local (Ollama, llama.cpp)
        │
        ▼
6. Agent may call tools
   - Tool policy checked
   - Tool executed
   - Results fed back to agent
   - Loop until done
        │
        ▼
7. Response streamed/chunked back
   - Long responses chunked for Telegram
   - Markdown formatted per channel
   - Media attachments handled
        │
        ▼
8. Channel plugin delivers message
   - Native formatting applied
   - Reply threading if requested
   - Reactions/buttons if configured
```
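Step 6 — the tool-call loop — can be sketched as follows, under assumed types; the real loop lives inside the embedded Pi runtime, not in this form:

```typescript
// Assumed minimal types for illustration.
type ToolCall = { name: string; args: unknown };
type ModelTurn = { text?: string; toolCalls?: ToolCall[] };
type Model = (history: string[]) => ModelTurn;
type Tools = Record<string, (args: unknown) => string>;

// Loop: let the model act, execute any tool calls (the real system runs a
// policy check first), feed results back, and repeat until it answers in text.
function runAgentTurn(model: Model, tools: Tools, userMessage: string): string {
  const history = [userMessage];
  for (;;) {
    const turn = model(history);
    if (!turn.toolCalls?.length) return turn.text ?? "";
    for (const call of turn.toolCalls) {
      const result = tools[call.name]?.(call.args) ?? "tool not available";
      history.push(`tool:${call.name} -> ${result}`);
    }
  }
}
```

A fake model that first requests `exec` and then answers demonstrates the feedback cycle without any provider involved.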
---

## Configuration

Config lives at `~/.clawdbot/clawdbot.yaml`:

```yaml
# Model Providers
providers:
  anthropic:
    key: "sk-ant-..."
  openai:
    key: "sk-..."
  google:
    key: "..."

# Default Model
defaultModel: "anthropic/claude-sonnet-4-5"

# Channel Configs
discord:
  token: "..."
  defaultModel: "anthropic/claude-opus-4-5"

telegram:
  token: "..."

whatsapp:
  enabled: true

# Agent Config
agent:
  workspaceDir: "~/.clawdbot/workspace"

agents:
  main:
    workspaceDir: "~/.clawdbot/workspace"

# Tool Policies
tools:
  exec:
    security: "full"   # full | allowlist | deny
    host: "sandbox"    # sandbox | gateway | node
  browser:
    profile: "clawd"

# Cron Jobs
cron:
  jobs:
    - id: "daily-standup"
      schedule: "0 9 * * *"
      text: "Good morning! What's on the agenda?"

# Gateway Settings
gateway:
  port: 18789
  token: "..."
```
**Config Schema:**

The full schema lives at `dist/protocol.schema.json` (~80KB of JSON Schema).

---
## Key Dependencies

```jsonc
{
  // AI Agent Core
  "@mariozechner/pi-agent-core": "0.49.3",
  "@mariozechner/pi-ai": "0.49.3",
  "@mariozechner/pi-coding-agent": "0.49.3",

  // Messaging Channels
  "discord-api-types": "^0.38.37",
  "grammy": "^1.39.3",
  "@whiskeysockets/baileys": "7.0.0-rc.9",
  "@slack/bolt": "^4.6.0",
  "@line/bot-sdk": "^10.6.0",

  // Browser Automation
  "playwright-core": "1.58.0",
  "chromium-bidi": "13.0.1",

  // Vector Search
  "sqlite-vec": "0.1.7-alpha.2",

  // Image Processing
  "sharp": "^0.34.5",
  "@napi-rs/canvas": "^0.1.88",

  // TTS
  "node-edge-tts": "^1.2.9",

  // Scheduling
  "croner": "^9.1.0",

  // HTTP/WebSocket
  "hono": "4.11.4",
  "ws": "^8.19.0",
  "undici": "^7.19.0",

  // Schema Validation
  "zod": "^4.3.6",
  "@sinclair/typebox": "0.34.47",
  "ajv": "^8.17.1"
}
```

---
## Key Architectural Decisions

### 1. Single Gateway Process
One process owns all channel connections to avoid session conflicts (especially WhatsApp Web, which only allows one active session).

### 2. Pi as Core Runtime
Leverages the Pi coding agent's battle-tested:
- Tool streaming
- Context management
- Multi-provider support
- Structured output handling

### 3. Dynamic System Prompt
Built at runtime based on:
- Available tools (policy-filtered)
- Available skills
- Current channel
- User configuration
- Project context files

### 4. Plugin Architecture
Everything is pluggable:
- Channels (Discord, Telegram, etc.)
- Tools (exec, browser, etc.)
- Hooks (voice transcription, media processing)
- Skills (external instruction packages)

### 5. Session Isolation
Each conversation gets isolated:
- History
- Model settings
- Tool state
- Sub-agent references

### 6. Sub-agent Spawning
Complex tasks spawn isolated sub-agents that:
- Run in separate sessions
- Have restricted tool access
- Report back when complete
- Can be monitored/killed

### 7. Multi-Layer Tool Policy
Defense in depth:
- Global policy
- Provider policy
- Agent policy
- Group policy
- Sandbox policy
- Sub-agent policy

---
## File Counts

| Category | Count |
|----------|-------|
| Compiled JS files | ~800 |
| Documentation files | ~50 |
| Skill packages | ~50 |
| Channel plugins | ~12 |
| Tool implementations | ~25 |

**Total package size:** ~1.5MB (minified JS + assets)

---
## Development

```bash
# Clone and install
git clone https://github.com/clawdbot/clawdbot
cd clawdbot
pnpm install

# Build
pnpm build

# Run dev gateway
pnpm gateway:dev

# Run tests
pnpm test

# Lint
pnpm lint
```

---
## Resources

- **GitHub:** https://github.com/clawdbot/clawdbot
- **Docs:** https://docs.clawd.bot
- **Discord:** https://discord.com/invite/clawd
- **Skills Marketplace:** https://clawdhub.com

---

*Generated by Buba, 2026-02-06*
@ -1,6 +1,6 @@
 {
   "version": 1,
-  "lastUpdated": "2026-02-05T17:00:00Z",
+  "lastUpdated": "2026-02-06T05:00:00Z",
   "updatedBy": "heartbeat-cron",
   "phases": [
     {
102
surya-blender/README.md
Normal file
# SURYA - Blender Animations

Converted from the Manim animations for Das's debut album.

## Prerequisites

Install Blender (4.0+ recommended):

```bash
brew install --cask blender
```

Or download from: https://www.blender.org/download/
## Quick Start

### Generate all scenes and exports:

```bash
/Applications/Blender.app/Contents/MacOS/Blender --background --python generate_all.py
```

This creates:
- `surya_all.blend` - All animations in separate scenes
- `exports/` folder with GLTF and Alembic files for each track

### Generate individual tracks:

```bash
# Track 1 - Skin (Morphing surfaces)
/Applications/Blender.app/Contents/MacOS/Blender --background --python tracks/track01_skin.py

# Track 2 - U Saved Me (Sacred geometry)
/Applications/Blender.app/Contents/MacOS/Blender --background --python tracks/track02_u_saved_me.py

# etc...
```

### Export only:

```bash
# After generating, export all scenes
/Applications/Blender.app/Contents/MacOS/Blender surya_all.blend --background --python export_gltf.py
/Applications/Blender.app/Contents/MacOS/Blender surya_all.blend --background --python export_alembic.py
```
## Tracks Included

| Track | Name | Animation |
|-------|------|-----------|
| 01 | Skin (Intro) | Morphing parametric surface |
| 02 | U Saved Me | Particles → icosahedron |
| 03 | Nothing | Sphere explosion |
| 06 | Nature's Call | 3D fractal tree |
| 08 | IDK | Lorenz attractor |
| 09 | With U | Two orbiting souls + stars |
| 14 | Hollow | Golden spiral to moon |
## Color Palette

```python
COLORS = {
    "skin": (0.957, 0.447, 0.714),          # #f472b6 Pink
    "u_saved_me": (0.133, 0.827, 0.933),    # #22d3ee Cyan
    "nothing": (0.4, 0.4, 0.4),             # #666666 Grey
    "natures_call": (0.369, 0.918, 0.831),  # #5eead4 Teal
    "idk": (0.957, 0.447, 0.714),           # #f472b6 Pink
    "with_u": (0.984, 0.749, 0.141),        # #fbbf24 Gold
    "hollow": (0.984, 0.749, 0.141),        # #fbbf24 Gold
}
```
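The float tuples mirror the hex codes in the comments; a small helper (hypothetical, not part of the scripts) makes the conversion explicit:

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert a '#rrggbb' hex string to Blender's 0-1 float RGB tuple."""
    h = hex_color.lstrip("#")
    # Each channel: parse two hex digits, scale to 0-1, round to 3 places.
    return tuple(round(int(h[i:i + 2], 16) / 255, 3) for i in (0, 2, 4))
```

For example, `hex_to_rgb("#f472b6")` reproduces the `"skin"` tuple above.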
## Animation Settings

- Frame rate: 30 FPS
- Duration: ~750 frames (25 seconds) per track
- Resolution: 1920x1080 (configurable)
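`generate_all.py` imports `FPS` and `TOTAL_FRAMES` from `utils.py`, which is not shown in this commit. A minimal sketch of what those shared settings might look like, matching the values above — everything beyond the two imported constant names is an assumption:

```python
# Hypothetical slice of utils.py; only FPS and TOTAL_FRAMES are known
# to exist (generate_all.py uses them), the rest is illustrative.
FPS = 30
TOTAL_FRAMES = 750
RESOLUTION = (1920, 1080)

def duration_seconds(frames: int = TOTAL_FRAMES, fps: int = FPS) -> float:
    """Clip length in seconds for a given frame count."""
    return frames / fps
```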
## Files Structure

```
surya-blender/
├── README.md
├── generate_all.py       # Master script - generates everything
├── export_gltf.py        # Export all scenes as GLTF
├── export_alembic.py     # Export all scenes as Alembic
├── utils.py              # Shared utilities
├── tracks/
│   ├── track01_skin.py
│   ├── track02_u_saved_me.py
│   ├── track03_nothing.py
│   ├── track06_natures_call.py
│   ├── track08_idk.py
│   ├── track09_with_u.py
│   └── track14_hollow.py
└── exports/              # Generated exports
    ├── track01_skin.gltf
    ├── track01_skin.abc
    └── ...
```
## Tips

- GLTF exports include animations and are web-compatible
- Alembic (.abc) preserves geometry animation for compositing
- Open `surya_all.blend` in Blender to preview/edit before exporting
76
surya-blender/export_alembic.py
Normal file
"""
|
||||
SURYA - Alembic Export Script
|
||||
Exports all scenes from surya_all.blend as individual Alembic (.abc) files.
|
||||
|
||||
Usage:
|
||||
/Applications/Blender.app/Contents/MacOS/Blender surya_all.blend --background --python export_alembic.py
|
||||
"""
|
||||
|
||||
import bpy
|
||||
import os
|
||||
|
||||
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
EXPORTS_DIR = os.path.join(SCRIPT_DIR, "exports")
|
||||
|
||||
def export_all_alembic():
|
||||
"""Export each scene as an Alembic file."""
|
||||
os.makedirs(EXPORTS_DIR, exist_ok=True)
|
||||
|
||||
print("=" * 60)
|
||||
print("SURYA - Alembic Export")
|
||||
print("=" * 60)
|
||||
|
||||
exported = []
|
||||
failed = []
|
||||
|
||||
for scene in bpy.data.scenes:
|
||||
scene_name = scene.name
|
||||
print(f"\nExporting: {scene_name}...")
|
||||
|
||||
# Set as active scene
|
||||
bpy.context.window.scene = scene
|
||||
|
||||
abc_path = os.path.join(EXPORTS_DIR, f"{scene_name}.abc")
|
||||
|
||||
try:
|
||||
bpy.ops.wm.alembic_export(
|
||||
filepath=abc_path,
|
||||
start=scene.frame_start,
|
||||
end=scene.frame_end,
|
||||
xsamples=1,
|
||||
gsamples=1,
|
||||
sh_open=0.0,
|
||||
sh_close=1.0,
|
||||
export_hair=False,
|
||||
export_particles=False,
|
||||
flatten=False,
|
||||
selected=False,
|
||||
export_normals=True,
|
||||
export_uvs=True,
|
||||
export_custom_properties=True,
|
||||
visible_objects_only=True,
|
||||
)
|
||||
print(f" ✓ Exported: {abc_path}")
|
||||
exported.append(scene_name)
|
||||
except Exception as e:
|
||||
print(f" ✗ Failed: {e}")
|
||||
failed.append((scene_name, str(e)))
|
||||
|
||||
# Summary
|
||||
print("\n" + "=" * 60)
|
||||
print("ALEMBIC EXPORT SUMMARY")
|
||||
print("=" * 60)
|
||||
print(f"\nSuccessful: {len(exported)}")
|
||||
for name in exported:
|
||||
print(f" ✓ {name}.abc")
|
||||
|
||||
if failed:
|
||||
print(f"\nFailed: {len(failed)}")
|
||||
for name, error in failed:
|
||||
print(f" ✗ {name}: {error}")
|
||||
|
||||
print(f"\nOutput directory: {EXPORTS_DIR}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
export_all_alembic()
|
||||
78
surya-blender/export_gltf.py
Normal file
"""
|
||||
SURYA - GLTF Export Script
|
||||
Exports all scenes from surya_all.blend as individual GLTF files.
|
||||
|
||||
Usage:
|
||||
/Applications/Blender.app/Contents/MacOS/Blender surya_all.blend --background --python export_gltf.py
|
||||
"""
|
||||
|
||||
import bpy
|
||||
import os
|
||||
|
||||
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
EXPORTS_DIR = os.path.join(SCRIPT_DIR, "exports")
|
||||
|
||||
def export_all_gltf():
|
||||
"""Export each scene as a GLTF file."""
|
||||
os.makedirs(EXPORTS_DIR, exist_ok=True)
|
||||
|
||||
print("=" * 60)
|
||||
print("SURYA - GLTF Export")
|
||||
print("=" * 60)
|
||||
|
||||
exported = []
|
||||
failed = []
|
||||
|
||||
for scene in bpy.data.scenes:
|
||||
scene_name = scene.name
|
||||
print(f"\nExporting: {scene_name}...")
|
||||
|
||||
# Set as active scene
|
||||
bpy.context.window.scene = scene
|
||||
|
||||
# Deselect all, then select all visible
|
||||
bpy.ops.object.select_all(action='DESELECT')
|
||||
for obj in scene.objects:
|
||||
if obj.visible_get():
|
||||
obj.select_set(True)
|
||||
|
||||
gltf_path = os.path.join(EXPORTS_DIR, f"{scene_name}.gltf")
|
||||
|
||||
try:
|
||||
bpy.ops.export_scene.gltf(
|
||||
filepath=gltf_path,
|
||||
export_format='GLTF_SEPARATE',
|
||||
export_animations=True,
|
||||
export_apply=False,
|
||||
export_texcoords=True,
|
||||
export_normals=True,
|
||||
export_materials='EXPORT',
|
||||
use_selection=False,
|
||||
export_extras=False,
|
||||
export_yup=True,
|
||||
export_cameras=True,
|
||||
)
|
||||
print(f" ✓ Exported: {gltf_path}")
|
||||
exported.append(scene_name)
|
||||
except Exception as e:
|
||||
print(f" ✗ Failed: {e}")
|
||||
failed.append((scene_name, str(e)))
|
||||
|
||||
# Summary
|
||||
print("\n" + "=" * 60)
|
||||
print("GLTF EXPORT SUMMARY")
|
||||
print("=" * 60)
|
||||
print(f"\nSuccessful: {len(exported)}")
|
||||
for name in exported:
|
||||
print(f" ✓ {name}.gltf")
|
||||
|
||||
if failed:
|
||||
print(f"\nFailed: {len(failed)}")
|
||||
for name, error in failed:
|
||||
print(f" ✗ {name}: {error}")
|
||||
|
||||
print(f"\nOutput directory: {EXPORTS_DIR}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
export_all_gltf()
|
||||
174
surya-blender/generate_all.py
Normal file
"""
|
||||
SURYA - Master Generation Script
|
||||
Creates all track animations in a single Blender file with separate scenes,
|
||||
then exports each scene as GLTF and Alembic.
|
||||
|
||||
Usage:
|
||||
/Applications/Blender.app/Contents/MacOS/Blender --background --python generate_all.py
|
||||
"""
|
||||
|
||||
import bpy
|
||||
import os
|
||||
import sys
|
||||
|
||||
# Get the directory of this script
|
||||
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
sys.path.insert(0, SCRIPT_DIR)
|
||||
sys.path.insert(0, os.path.join(SCRIPT_DIR, "tracks"))
|
||||
|
||||
from utils import *
|
||||
|
||||
# Import track modules
|
||||
from tracks import track01_skin
|
||||
from tracks import track02_u_saved_me
|
||||
from tracks import track03_nothing
|
||||
from tracks import track06_natures_call
|
||||
from tracks import track08_idk
|
||||
from tracks import track09_with_u
|
||||
from tracks import track14_hollow
|
||||
|
||||
|
||||
# Track configurations
|
||||
TRACKS = [
|
||||
("Track01_Skin", track01_skin.create_skin_animation),
|
||||
("Track02_USavedMe", track02_u_saved_me.create_sacred_geometry_animation),
|
||||
("Track03_Nothing", track03_nothing.create_nothing_animation),
|
||||
("Track06_NaturesCall", track06_natures_call.create_natures_call_animation),
|
||||
("Track08_IDK", track08_idk.create_idk_animation),
|
||||
("Track09_WithU", track09_with_u.create_with_u_animation),
|
||||
("Track14_Hollow", track14_hollow.create_hollow_animation),
|
||||
]
|
||||
|
||||
|
||||
def generate_all_scenes():
|
||||
"""Generate all track scenes in a single Blender file."""
|
||||
print("=" * 60)
|
||||
print("SURYA - Blender Animation Generator")
|
||||
print("=" * 60)
|
||||
|
||||
# Delete the default scene
|
||||
if "Scene" in bpy.data.scenes:
|
||||
default_scene = bpy.data.scenes["Scene"]
|
||||
if len(bpy.data.scenes) > 1:
|
||||
bpy.data.scenes.remove(default_scene)
|
||||
|
||||
# Create each track scene
|
||||
for track_name, create_func in TRACKS:
|
||||
print(f"\nGenerating: {track_name}...")
|
||||
|
||||
# Create new scene
|
||||
scene = bpy.data.scenes.new(track_name)
|
||||
bpy.context.window.scene = scene
|
||||
|
||||
# Setup scene defaults
|
||||
scene.frame_start = 1
|
||||
scene.frame_end = TOTAL_FRAMES
|
||||
scene.render.fps = FPS
|
||||
scene.render.resolution_x = 1920
|
||||
scene.render.resolution_y = 1080
|
||||
|
||||
# Create world for this scene
|
||||
world = bpy.data.worlds.new(f"{track_name}_World")
|
||||
scene.world = world
|
||||
world.use_nodes = True
|
||||
|
||||
# Run the track creation function
|
||||
try:
|
||||
create_func()
|
||||
print(f" ✓ {track_name} created successfully")
|
||||
except Exception as e:
|
||||
print(f" ✗ Error creating {track_name}: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
# Save the master blend file
|
||||
output_path = os.path.join(SCRIPT_DIR, "surya_all.blend")
|
||||
bpy.ops.wm.save_as_mainfile(filepath=output_path)
|
||||
print(f"\n✓ Saved master file: {output_path}")
|
||||
|
||||
return output_path
|
||||
|
||||
|
||||
def export_all_scenes():
|
||||
"""Export each scene as GLTF and Alembic."""
|
||||
exports_dir = os.path.join(SCRIPT_DIR, "exports")
|
||||
os.makedirs(exports_dir, exist_ok=True)
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
print("Exporting scenes...")
|
||||
print("=" * 60)
|
||||
|
||||
for scene in bpy.data.scenes:
|
||||
scene_name = scene.name
|
||||
print(f"\nExporting: {scene_name}...")
|
||||
|
||||
# Set as active scene
|
||||
bpy.context.window.scene = scene
|
||||
|
||||
# Select all objects in scene
|
||||
for obj in scene.objects:
|
||||
obj.select_set(True)
|
||||
|
||||
# Export GLTF
|
||||
gltf_path = os.path.join(exports_dir, f"{scene_name}.gltf")
|
||||
try:
|
||||
bpy.ops.export_scene.gltf(
|
||||
filepath=gltf_path,
|
||||
export_format='GLTF_SEPARATE',
|
||||
export_animations=True,
|
||||
export_apply=False,
|
||||
export_texcoords=True,
|
||||
export_normals=True,
|
||||
export_materials='EXPORT',
|
||||
use_selection=False,
|
||||
export_extras=False,
|
||||
export_yup=True,
|
||||
)
|
||||
print(f" ✓ GLTF: {gltf_path}")
|
||||
except Exception as e:
|
||||
print(f" ✗ GLTF export failed: {e}")
|
||||
|
||||
# Export Alembic
|
||||
abc_path = os.path.join(exports_dir, f"{scene_name}.abc")
|
||||
try:
|
||||
bpy.ops.wm.alembic_export(
|
||||
filepath=abc_path,
|
||||
start=scene.frame_start,
|
||||
end=scene.frame_end,
|
||||
export_hair=False,
|
||||
export_particles=False,
|
||||
flatten=False,
|
||||
selected=False,
|
||||
export_normals=True,
|
||||
export_uvs=True,
|
||||
)
|
||||
print(f" ✓ Alembic: {abc_path}")
|
||||
except Exception as e:
|
||||
print(f" ✗ Alembic export failed: {e}")
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
print("Export complete!")
|
||||
print("=" * 60)
|
||||
|
||||
|
||||
def main():
|
||||
"""Main entry point."""
|
||||
# Generate all scenes
|
||||
blend_path = generate_all_scenes()
|
||||
|
||||
# Export all scenes
|
||||
export_all_scenes()
|
||||
|
||||
# Print summary
|
||||
print("\n" + "=" * 60)
|
||||
print("SURYA GENERATION COMPLETE")
|
||||
print("=" * 60)
|
||||
print(f"\nBlend file: {blend_path}")
|
||||
print(f"Exports: {os.path.join(SCRIPT_DIR, 'exports')}/")
|
||||
print("\nGenerated tracks:")
|
||||
for track_name, _ in TRACKS:
|
||||
print(f" - {track_name}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
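The `TRACKS` table above drives a simple (name, builder) dispatch with per-track error isolation, so one failing track cannot abort the whole batch. A minimal sketch of that pattern outside Blender — the stub builders here are hypothetical stand-ins for the track modules:

```python
# Minimal sketch of the (name, builder) dispatch used by generate_all_scenes(),
# with per-entry error isolation. The stub builders are hypothetical.

def build_ok():
    return "scene"

def build_broken():
    raise RuntimeError("boom")

DEMO_TRACKS = [
    ("Track01_Skin", build_ok),
    ("Track03_Nothing", build_broken),
    ("Track08_IDK", build_ok),
]

def run_all(tracks):
    succeeded, failed = [], []
    for name, create_func in tracks:
        try:
            create_func()          # one bad builder is caught, not fatal
            succeeded.append(name)
        except Exception as e:
            failed.append((name, str(e)))
    return succeeded, failed

succeeded, failed = run_all(DEMO_TRACKS)
# succeeded -> ['Track01_Skin', 'Track08_IDK']; failed -> [('Track03_Nothing', 'boom')]
```

The same try/except shape appears in `generate_all_scenes()`, where the traceback is additionally printed for debugging.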
surya-blender/tracks/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
# SURYA Track Modules
surya-blender/tracks/track01_skin.py (new file, 155 lines)
@@ -0,0 +1,155 @@
"""
Track 01: SKIN (INTRO) - Morphing Parametric Surface
"""

import bpy
import bmesh
import math
import sys
import os

# Add parent directory to path for utils
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def create_parametric_surface(u_segments=40, v_segments=40, t=0):
    """Create a morphing skin-like parametric surface."""
    mesh = bpy.data.meshes.new("SkinSurface")
    obj = bpy.data.objects.new("SkinSurface", mesh)
    bpy.context.collection.objects.link(obj)

    bm = bmesh.new()

    # Generate vertices
    verts = []
    for i in range(u_segments + 1):
        u = i / u_segments * math.pi
        row = []
        for j in range(v_segments + 1):
            v = j / v_segments * 2 * math.pi

            # Parametric equations with time-based morphing
            x = math.sin(u) * math.cos(v) * (1 + 0.2 * math.sin(3 * u + t))
            y = math.sin(u) * math.sin(v) * (1 + 0.2 * math.cos(3 * v + t))
            z = math.cos(u) + 0.3 * math.sin(3 * v + 2 * u + t)

            vert = bm.verts.new((x * 2, y * 2, z * 2))
            row.append(vert)
        verts.append(row)

    bm.verts.ensure_lookup_table()

    # Create faces
    for i in range(u_segments):
        for j in range(v_segments):
            v1 = verts[i][j]
            v2 = verts[i][j + 1]
            v3 = verts[i + 1][j + 1]
            v4 = verts[i + 1][j]
            try:
                bm.faces.new([v1, v2, v3, v4])
            except ValueError:
                # Degenerate or duplicate face (e.g. at the poles); skip it
                pass

    bm.to_mesh(mesh)
    bm.free()

    # Add subdivision and smooth shading
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.shade_smooth()

    return obj

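The grid math in `create_parametric_surface()` can be sanity-checked outside Blender. A standalone sketch (pure Python, no `bpy`; `surface_points` is a stand-in that duplicates the parametric equations above) confirming the vertex count and coordinate envelope:

```python
import math

# Pure-Python stand-in for the vertex grid built in create_parametric_surface(),
# so counts and bounds can be checked without Blender. Mirrors the loops above.
def surface_points(u_segments=40, v_segments=40, t=0.0):
    points = []
    for i in range(u_segments + 1):
        u = i / u_segments * math.pi
        for j in range(v_segments + 1):
            v = j / v_segments * 2 * math.pi
            x = math.sin(u) * math.cos(v) * (1 + 0.2 * math.sin(3 * u + t))
            y = math.sin(u) * math.sin(v) * (1 + 0.2 * math.cos(3 * v + t))
            z = math.cos(u) + 0.3 * math.sin(3 * v + 2 * u + t)
            points.append((x * 2, y * 2, z * 2))
    return points

pts = surface_points()
# (u_segments + 1) * (v_segments + 1) = 41 * 41 = 1681 vertices
assert len(pts) == 41 * 41
# Each term is bounded by 1.3 before the *2 scaling, so |coord| <= 2.6
assert all(abs(c) <= 2.6 + 1e-9 for p in pts for c in p)
```

The face loop then stitches `u_segments * v_segments` quads over this grid, which is why the morph shape keys can copy vertices index-for-index: every call with the same segment counts produces the same vertex ordering.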
def create_skin_animation():
    """Create the full Track 01 animation."""
    clear_scene()
    setup_scene(background_color=COLORS["intro"])

    # Create camera
    camera = create_camera(location=(0, -8, 4), rotation=(1.2, 0, 0))
    animate_camera_orbit(camera, center=(0, 0, 0), radius=8, height=4,
                         start_frame=1, end_frame=TOTAL_FRAMES, revolutions=0.5)

    # Create the morphing surface using shape keys
    base_surface = create_parametric_surface(t=0)

    # Add material
    mat = create_emission_material("SkinMaterial", COLORS["skin"], strength=1.5)
    base_surface.data.materials.append(mat)

    # Create shape keys for morphing animation
    base_surface.shape_key_add(name="Basis")

    # Create morph targets at different time values
    morph_frames = [1, 125, 250, 375, 500, 625, 750]

    for idx, frame in enumerate(morph_frames):
        t = idx * math.pi / 2

        # Create temporary mesh for this morph state
        temp = create_parametric_surface(t=t)

        # Add shape key from temp mesh
        sk = base_surface.shape_key_add(name=f"Morph_{idx}")

        # Copy vertex positions
        for i, vert in enumerate(temp.data.vertices):
            if i < len(sk.data):
                sk.data[i].co = vert.co

        # Delete temp object
        bpy.data.objects.remove(temp)

    # Animate shape keys
    for idx, frame in enumerate(morph_frames):
        for sk_idx in range(1, len(base_surface.data.shape_keys.key_blocks)):
            sk = base_surface.data.shape_keys.key_blocks[sk_idx]

            # Set value: 1.0 at matching frame, 0.0 at others
            if sk_idx == idx + 1:
                sk.value = 1.0
            else:
                sk.value = 0.0
            sk.keyframe_insert(data_path="value", frame=frame)

    # Add gentle rotation
    for frame in range(1, TOTAL_FRAMES + 1, 10):
        t = frame / TOTAL_FRAMES
        base_surface.rotation_euler = (0, 0, t * math.pi * 0.5)
        base_surface.keyframe_insert(data_path="rotation_euler", frame=frame)

    # Add entrance and exit animations
    keyframe_scale(base_surface, 1, 0.01)
    keyframe_scale(base_surface, 90, 1.0)
    keyframe_scale(base_surface, TOTAL_FRAMES - 30, 1.0)
    keyframe_scale(base_surface, TOTAL_FRAMES, 0.01)

    return base_surface


if __name__ == "__main__":
    create_skin_animation()

    # Save the blend file
    output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track01_skin.blend"))

    # Export GLTF
    bpy.ops.export_scene.gltf(
        filepath=os.path.join(output_dir, "exports", "track01_skin.gltf"),
        export_animations=True,
        export_format='GLTF_SEPARATE'
    )

    # Export Alembic
    bpy.ops.wm.alembic_export(
        filepath=os.path.join(output_dir, "exports", "track01_skin.abc"),
        start=1,
        end=TOTAL_FRAMES,
        export_hair=False,
        export_particles=False
    )

    print("Track 01 - Skin: Export complete!")
surya-blender/tracks/track02_u_saved_me.py (new file, 170 lines)
@@ -0,0 +1,170 @@
"""
Track 02: U SAVED ME - Particles Coalescing into Sacred Geometry (Icosahedron)
"""

import bpy
import math
import random
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def get_icosahedron_vertices(scale=2.5):
    """Get icosahedron vertex positions."""
    phi = (1 + math.sqrt(5)) / 2  # Golden ratio

    verts = [
        [0, 1, phi], [0, -1, phi], [0, 1, -phi], [0, -1, -phi],
        [1, phi, 0], [-1, phi, 0], [1, -phi, 0], [-1, -phi, 0],
        [phi, 0, 1], [-phi, 0, 1], [phi, 0, -1], [-phi, 0, -1]
    ]

    # Normalize and scale
    result = []
    for v in verts:
        norm = math.sqrt(sum(x*x for x in v))
        result.append([x / norm * scale for x in v])

    return result


def get_icosahedron_edges():
    """Get icosahedron edge pairs."""
    return [
        (0, 1), (0, 4), (0, 5), (0, 8), (0, 9),
        (1, 6), (1, 7), (1, 8), (1, 9),
        (2, 3), (2, 4), (2, 5), (2, 10), (2, 11),
        (3, 6), (3, 7), (3, 10), (3, 11),
        (4, 5), (4, 8), (4, 10),
        (5, 9), (5, 11),
        (6, 7), (6, 8), (6, 10),
        (7, 9), (7, 11),
        (8, 10), (9, 11)
    ]

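Since both the vertex table and the edge list are hand-written, they are worth sanity-checking outside Blender: a regular icosahedron has 12 vertices and 30 edges, all edges the same length. A standalone sketch (`ico_vertices`/`ico_edges` are stand-alone copies of the two helpers above, so it runs without `bpy`):

```python
import math

# Standalone copies of the icosahedron helpers above (no bpy needed),
# used to sanity-check the hand-written vertex and edge tables.
def ico_vertices(scale=2.5):
    phi = (1 + math.sqrt(5)) / 2
    verts = [
        [0, 1, phi], [0, -1, phi], [0, 1, -phi], [0, -1, -phi],
        [1, phi, 0], [-1, phi, 0], [1, -phi, 0], [-1, -phi, 0],
        [phi, 0, 1], [-phi, 0, 1], [phi, 0, -1], [-phi, 0, -1],
    ]
    out = []
    for v in verts:
        n = math.sqrt(sum(x * x for x in v))
        out.append([x / n * scale for x in v])
    return out

def ico_edges():
    return [
        (0, 1), (0, 4), (0, 5), (0, 8), (0, 9),
        (1, 6), (1, 7), (1, 8), (1, 9),
        (2, 3), (2, 4), (2, 5), (2, 10), (2, 11),
        (3, 6), (3, 7), (3, 10), (3, 11),
        (4, 5), (4, 8), (4, 10),
        (5, 9), (5, 11),
        (6, 7), (6, 8), (6, 10),
        (7, 9), (7, 11),
        (8, 10), (9, 11),
    ]

verts, edges = ico_vertices(), ico_edges()
assert len(verts) == 12 and len(edges) == 30  # V=12, E=30 for an icosahedron
# Every normalized vertex sits on the sphere of radius `scale`
assert all(abs(math.dist(v, (0, 0, 0)) - 2.5) < 1e-9 for v in verts)
# All 30 edges have the same length (regular polyhedron)
lengths = [math.dist(verts[a], verts[b]) for a, b in edges]
assert max(lengths) - min(lengths) < 1e-9
```

The equal-edge check is the important one: a single wrong index pair in the edge table would connect non-adjacent vertices and show up as an outlier length.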
def create_sacred_geometry_animation():
    """Create the full Track 02 animation."""
    clear_scene()
    setup_scene(background_color=(0.04, 0.08, 0.16, 1.0))  # Dark blue

    # Create camera
    camera = create_camera(location=(0, -12, 6), rotation=(1.1, 0, 0))
    animate_camera_orbit(camera, center=(0, 0, 0), radius=12, height=5,
                         start_frame=1, end_frame=TOTAL_FRAMES, revolutions=0.4)

    # Get target positions (icosahedron vertices)
    target_positions = get_icosahedron_vertices(scale=2.5)

    # Add midpoints for more particles.
    # Iterate over a snapshot: appending to the same list while
    # enumerating it would keep extending the loop and never terminate.
    base_verts = list(target_positions)
    for i, v1 in enumerate(base_verts):
        for v2 in base_verts[i+1:]:
            mid = [(v1[j] + v2[j]) / 2 for j in range(3)]
            norm = math.sqrt(sum(x*x for x in mid))
            if norm > 0.5:
                target_positions.append([x / norm * 2.5 for x in mid])

    # Limit to 60 particles
    target_positions = target_positions[:60]

    # Create particles
    particles = []
    mat = create_emission_material("ParticleMat", COLORS["u_saved_me"], strength=3.0)

    for i, target in enumerate(target_positions):
        # Random start position (scattered)
        start = [random.uniform(-8, 8) for _ in range(3)]

        bpy.ops.mesh.primitive_ico_sphere_add(radius=0.12, subdivisions=2, location=start)
        particle = bpy.context.active_object
        particle.name = f"Particle_{i:03d}"
        particle.data.materials.append(mat)
        particles.append((particle, start, target))

    # Animate particles coalescing
    coalesce_start = 60
    coalesce_end = 300

    for particle, start, target in particles:
        # Start position
        keyframe_location(particle, 1, start)
        keyframe_location(particle, coalesce_start, start)

        # End position (at icosahedron vertex)
        keyframe_location(particle, coalesce_end, target)
        keyframe_location(particle, TOTAL_FRAMES - 30, target)

        # Set easing
        if particle.animation_data:
            for fc in particle.animation_data.action.fcurves:
                for kf in fc.keyframe_points:
                    kf.interpolation = 'BEZIER'
                    kf.easing = 'EASE_IN_OUT'

    # Create edges after particles arrive
    edges = []
    edge_pairs = get_icosahedron_edges()
    edge_mat = create_emission_material("EdgeMat", COLORS["u_saved_me"], strength=1.5)

    verts = get_icosahedron_vertices(scale=2.5)

    for i, (v1_idx, v2_idx) in enumerate(edge_pairs):
        v1 = verts[v1_idx]
        v2 = verts[v2_idx]

        # Create curve for edge
        curve_data = bpy.data.curves.new(name=f"Edge_{i}", type='CURVE')
        curve_data.dimensions = '3D'
        curve_data.bevel_depth = 0.02

        spline = curve_data.splines.new('BEZIER')
        spline.bezier_points.add(1)
        spline.bezier_points[0].co = v1
        spline.bezier_points[1].co = v2

        curve_obj = bpy.data.objects.new(f"Edge_{i}", curve_data)
        bpy.context.collection.objects.link(curve_obj)
        curve_obj.data.materials.append(edge_mat)

        # Animate edge appearance (bevel depth)
        curve_data.bevel_depth = 0.0
        curve_data.keyframe_insert(data_path="bevel_depth", frame=coalesce_end)
        curve_data.keyframe_insert(data_path="bevel_depth", frame=coalesce_end + 30)

        curve_data.bevel_depth = 0.02
        curve_data.keyframe_insert(data_path="bevel_depth", frame=coalesce_end + 90)

        edges.append(curve_obj)

    # Pulse effect (scale all particles)
    pulse_frames = [400, 450, 500, 550]
    for frame in pulse_frames:
        for particle, _, _ in particles:
            keyframe_scale(particle, frame, 1.0)
            keyframe_scale(particle, frame + 15, 1.4)
            keyframe_scale(particle, frame + 30, 1.0)

    return particles, edges


if __name__ == "__main__":
    create_sacred_geometry_animation()

    output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track02_u_saved_me.blend"))

    bpy.ops.export_scene.gltf(
        filepath=os.path.join(output_dir, "exports", "track02_u_saved_me.gltf"),
        export_animations=True,
        export_format='GLTF_SEPARATE'
    )

    bpy.ops.wm.alembic_export(
        filepath=os.path.join(output_dir, "exports", "track02_u_saved_me.abc"),
        start=1, end=TOTAL_FRAMES
    )

    print("Track 02 - U Saved Me: Export complete!")
surya-blender/tracks/track03_nothing.py (new file, 150 lines)
@@ -0,0 +1,150 @@
"""
Track 03: NOTHING - Sphere Explosion/Fragmentation into Void
"""

import bpy
import bmesh
import math
import random
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def create_fragments(count=60, sphere_radius=2.0):
    """Create fragment particles from a sphere."""
    fragments = []
    mat = create_emission_material("FragmentMat", COLORS["nothing"], strength=1.0)

    for i in range(count):
        # Random position within sphere
        theta = random.uniform(0, 2 * math.pi)
        phi = random.uniform(0, math.pi)
        r = random.uniform(0, sphere_radius * 0.9)

        x = r * math.sin(phi) * math.cos(theta)
        y = r * math.sin(phi) * math.sin(theta)
        z = r * math.cos(phi)

        size = random.uniform(0.15, 0.4)

        bpy.ops.mesh.primitive_ico_sphere_add(
            radius=size,
            subdivisions=1,
            location=(x, y, z)
        )
        frag = bpy.context.active_object
        frag.name = f"Fragment_{i:03d}"
        frag.data.materials.append(mat)

        # Store explosion direction
        if r > 0.1:
            # (x/r, y/r, z/r) is already a unit vector
            direction = (x/r, y/r, z/r)
        else:
            # Near the center, pick a random direction and normalize it
            direction = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
            norm = math.sqrt(sum(d*d for d in direction))
            direction = tuple(d/norm for d in direction)

        fragments.append((frag, (x, y, z), direction))

    return fragments

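The placement math in `create_fragments()` can be checked without Blender. A pure-math sketch (`sample_fragment` is a stand-in that mirrors the spherical sampling and direction logic above) verifying that fragments start inside 90% of the sphere radius and that every explosion direction is a unit vector:

```python
import math
import random

# Pure-math stand-in for the fragment placement in create_fragments() (no bpy):
# sample a point inside the sphere in spherical coordinates and derive a
# unit explosion direction, mirroring the branch logic above.
def sample_fragment(sphere_radius=2.0, rng=random):
    theta = rng.uniform(0, 2 * math.pi)
    phi = rng.uniform(0, math.pi)
    r = rng.uniform(0, sphere_radius * 0.9)
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)
    if r > 0.1:
        direction = (x / r, y / r, z / r)   # already unit length
    else:
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n = math.sqrt(sum(c * c for c in d)) or 1.0  # guard the zero vector
        direction = tuple(c / n for c in d)
    return (x, y, z), direction

random.seed(3)
samples = [sample_fragment() for _ in range(500)]
# Every fragment starts inside 90% of the sphere radius
assert all(math.dist(p, (0, 0, 0)) <= 1.8 + 1e-9 for p, _ in samples)
# Explosion directions are unit vectors
assert all(abs(math.dist(d, (0, 0, 0)) - 1.0) < 1e-6 for _, d in samples)
```

Note this sampling is denser near the center than a volume-uniform distribution (uniform `r` and `phi`), which reads as an artistic choice here rather than a bug.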
def create_nothing_animation():
    """Create the full Track 03 animation."""
    clear_scene()
    setup_scene(background_color=(0.04, 0.04, 0.04, 1.0))  # Near black

    # Create camera
    camera = create_camera(location=(0, -10, 4), rotation=(1.15, 0, 0))
    animate_camera_orbit(camera, center=(0, 0, 0), radius=10, height=4,
                         start_frame=1, end_frame=TOTAL_FRAMES, revolutions=0.3)

    # Create initial sphere
    sphere = create_sphere(location=(0, 0, 0), radius=2.0, segments=32, rings=16, name="MainSphere")
    sphere_mat = create_emission_material("SphereMat", COLORS["nothing"], strength=1.2)
    sphere.data.materials.append(sphere_mat)

    # Animate sphere appearance
    keyframe_scale(sphere, 1, 0.01)
    keyframe_scale(sphere, 60, 1.0)
    keyframe_scale(sphere, 90, 1.0)

    # Explosion at frame 120
    explosion_frame = 120

    # Sphere disappears at explosion
    keyframe_scale(sphere, explosion_frame - 1, 1.0)
    keyframe_scale(sphere, explosion_frame, 0.01)

    # Create fragments
    fragments = create_fragments(count=60, sphere_radius=2.0)

    # Animate fragments
    for frag, start_pos, direction in fragments:
        # Start invisible
        keyframe_scale(frag, 1, 0.01)
        keyframe_scale(frag, explosion_frame - 1, 0.01)

        # Appear at explosion
        keyframe_scale(frag, explosion_frame, 1.0)
        keyframe_location(frag, explosion_frame, start_pos)

        # Drift outward into void
        drift_distance = random.uniform(6, 12)
        end_pos = tuple(start_pos[i] + direction[i] * drift_distance for i in range(3))

        # Animate drift
        keyframe_location(frag, TOTAL_FRAMES, end_pos)

        # Fade out (scale down)
        keyframe_scale(frag, TOTAL_FRAMES - 60, 0.6)
        keyframe_scale(frag, TOTAL_FRAMES, 0.1)

        # Set linear interpolation for drift
        if frag.animation_data:
            for fc in frag.animation_data.action.fcurves:
                for kf in fc.keyframe_points:
                    kf.interpolation = 'LINEAR'

    # Add some ambient dust particles
    dust = []
    dust_mat = create_emission_material("DustMat", COLORS["nothing"], strength=0.3)

    for i in range(40):
        pos = [random.uniform(-6, 6) for _ in range(3)]
        bpy.ops.mesh.primitive_ico_sphere_add(radius=0.05, subdivisions=0, location=pos)
        d = bpy.context.active_object
        d.name = f"Dust_{i:03d}"
        d.data.materials.append(dust_mat)

        # Slow random drift
        end_pos = [pos[j] + random.uniform(-2, 2) for j in range(3)]
        keyframe_location(d, 1, pos)
        keyframe_location(d, TOTAL_FRAMES, end_pos)

        dust.append(d)

    return sphere, fragments, dust


if __name__ == "__main__":
    create_nothing_animation()

    output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track03_nothing.blend"))

    bpy.ops.export_scene.gltf(
        filepath=os.path.join(output_dir, "exports", "track03_nothing.gltf"),
        export_animations=True,
        export_format='GLTF_SEPARATE'
    )

    bpy.ops.wm.alembic_export(
        filepath=os.path.join(output_dir, "exports", "track03_nothing.abc"),
        start=1, end=TOTAL_FRAMES
    )

    print("Track 03 - Nothing: Export complete!")
surya-blender/tracks/track06_natures_call.py (new file, 186 lines)
@@ -0,0 +1,186 @@
"""
Track 06: NATURE'S CALL - 3D Fractal Tree (L-System)
"""

import bpy
import math
import random
import sys
import os
from mathutils import Vector, Matrix

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def create_branch(start, end, thickness=0.05, name="Branch"):
    """Create a cylindrical branch between two points."""
    direction = Vector(end) - Vector(start)
    length = direction.length

    if length < 0.01:
        return None

    # Create cylinder
    bpy.ops.mesh.primitive_cylinder_add(
        radius=thickness,
        depth=length,
        location=(0, 0, 0)
    )
    branch = bpy.context.active_object
    branch.name = name

    # Position and orient
    mid = [(start[i] + end[i]) / 2 for i in range(3)]
    branch.location = mid

    # Rotate to align with direction
    up = Vector((0, 0, 1))
    direction.normalize()

    rotation_axis = up.cross(direction)
    if rotation_axis.length > 0.0001:
        rotation_axis.normalize()
        angle = math.acos(max(-1, min(1, up.dot(direction))))
        branch.rotation_mode = 'AXIS_ANGLE'
        branch.rotation_axis_angle = (angle, rotation_axis.x, rotation_axis.y, rotation_axis.z)

    return branch


def generate_tree_recursive(start, direction, length, depth, max_depth, branches, leaves):
    """Recursively generate tree branches."""
    if depth > max_depth or length < 0.1:
        # Add leaf
        leaves.append(start)
        return

    end = [start[i] + direction[i] * length for i in range(3)]
    branches.append((start, end, depth))

    # Generate child branches
    angles = [math.pi/5, -math.pi/5, math.pi/6, -math.pi/6]

    for angle in angles:
        if random.random() > 0.35:
            # Rotate direction around random axis
            rot_axis = (random.uniform(-1, 1), random.uniform(-1, 1), 0)
            norm = math.sqrt(sum(r*r for r in rot_axis))
            if norm > 0.01:
                rot_axis = tuple(r/norm for r in rot_axis)
            else:
                rot_axis = (1, 0, 0)

            # Simple rotation (approximate)
            new_dir = [
                direction[0] * math.cos(angle) + rot_axis[0] * (1 - math.cos(angle)),
                direction[1] * math.cos(angle) + rot_axis[1] * (1 - math.cos(angle)),
                direction[2] * math.cos(angle) + math.sin(angle) * 0.3 + 0.3
            ]

            # Normalize
            norm = math.sqrt(sum(d*d for d in new_dir))
            new_dir = [d/norm for d in new_dir]

            generate_tree_recursive(end, new_dir, length * 0.7, depth + 1, max_depth, branches, leaves)

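The structural invariants of `generate_tree_recursive()` — branch length shrinks by 0.7 per level, recursion stops at `max_depth`, and directions stay normalized so a branch's geometric length equals its `length` parameter — can be checked with a simplified standalone copy. This sketch (`grow` is a stand-in; it keeps the random pruning via a seeded RNG but drops the random rotation-axis wobble) runs without `bpy` or `mathutils`:

```python
import math
import random

# Simplified stand-in for generate_tree_recursive() (no bpy/mathutils):
# keeps the >0.35 random pruning (seeded for determinism) and the 0.7
# length decay, drops the random-axis wobble for clarity.
def grow(start, direction, length, depth, max_depth, branches, leaves, rng):
    if depth > max_depth or length < 0.1:
        leaves.append(start)
        return
    end = [start[i] + direction[i] * length for i in range(3)]
    branches.append((start, end, depth))
    for angle in (math.pi / 5, -math.pi / 5, math.pi / 6, -math.pi / 6):
        if rng.random() > 0.35:
            new_dir = [
                direction[0] * math.cos(angle),
                direction[1] * math.cos(angle),
                direction[2] * math.cos(angle) + math.sin(angle) * 0.3 + 0.3,
            ]
            n = math.sqrt(sum(d * d for d in new_dir))
            new_dir = [d / n for d in new_dir]   # keep direction unit length
            grow(end, new_dir, length * 0.7, depth + 1, max_depth, branches, leaves, rng)

branches, leaves = [], []
grow((0, 0, -4), (0, 0, 1), 2.0, 0, 5, branches, leaves, random.Random(42))
assert branches                              # the trunk is always emitted
assert all(d <= 5 for _, _, d in branches)   # recursion respects max_depth
# A depth-d branch has geometric length 2.0 * 0.7**d (unit directions)
assert all(abs(math.dist(s, e) - 2.0 * 0.7 ** d) < 1e-9 for s, e, d in branches)
```

The same invariants matter downstream: `create_natures_call_animation()` derives branch thickness and appearance frames from `depth` and `len(branches)`, so the decay and depth cap directly shape the growth animation.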
def create_natures_call_animation():
    """Create the full Track 06 animation."""
    clear_scene()
    setup_scene(background_color=(0.02, 0.11, 0.09, 1.0))  # Forest dark green

    # Create camera
    camera = create_camera(location=(0, -15, 5), rotation=(1.2, 0, 0))
    animate_camera_orbit(camera, center=(0, 0, 2), radius=15, height=5,
                         start_frame=1, end_frame=TOTAL_FRAMES, revolutions=0.3)

    # Generate tree structure
    branches = []
    leaves = []

    start = (0, 0, -4)
    direction = (0, 0, 1)
    generate_tree_recursive(start, direction, 2.0, 0, 5, branches, leaves)

    # Create branch objects with progressive animation
    branch_mat = create_emission_material("BranchMat", COLORS["natures_call"], strength=1.5)
    branch_objects = []

    total_branches = len(branches)
    frames_per_branch = max(1, 400 // total_branches)

    for i, (start_pos, end_pos, depth) in enumerate(branches):
        thickness = max(0.02, 0.1 - depth * 0.015)

        # Create curve instead of cylinder for smoother look
        curve_data = bpy.data.curves.new(name=f"Branch_{i}", type='CURVE')
        curve_data.dimensions = '3D'
        curve_data.bevel_depth = thickness
        curve_data.bevel_resolution = 4

        spline = curve_data.splines.new('BEZIER')
        spline.bezier_points.add(1)
        spline.bezier_points[0].co = start_pos
        spline.bezier_points[0].handle_right = [start_pos[j] + (end_pos[j] - start_pos[j]) * 0.3 for j in range(3)]
        spline.bezier_points[0].handle_left = start_pos
        spline.bezier_points[1].co = end_pos
        spline.bezier_points[1].handle_left = [end_pos[j] - (end_pos[j] - start_pos[j]) * 0.3 for j in range(3)]
        spline.bezier_points[1].handle_right = end_pos

        branch_obj = bpy.data.objects.new(f"Branch_{i}", curve_data)
        bpy.context.collection.objects.link(branch_obj)
        branch_obj.data.materials.append(branch_mat)

        # Animate growth (bevel depth)
        appear_frame = 30 + i * frames_per_branch

        curve_data.bevel_depth = 0.0
        curve_data.keyframe_insert(data_path="bevel_depth", frame=1)
        curve_data.keyframe_insert(data_path="bevel_depth", frame=appear_frame)

        curve_data.bevel_depth = thickness
        curve_data.keyframe_insert(data_path="bevel_depth", frame=appear_frame + 20)

        branch_objects.append(branch_obj)

    # Create leaves
    leaf_mat = create_emission_material("LeafMat", (0.133, 0.773, 0.333, 1.0), strength=2.0)
    leaf_objects = []

    leaves_appear_frame = 450

    for i, pos in enumerate(leaves[:80]):  # Limit leaves
        bpy.ops.mesh.primitive_ico_sphere_add(radius=0.12, subdivisions=1, location=pos)
        leaf = bpy.context.active_object
        leaf.name = f"Leaf_{i:03d}"
        leaf.data.materials.append(leaf_mat)

        # Animate leaf appearance
        keyframe_scale(leaf, 1, 0.01)
        keyframe_scale(leaf, leaves_appear_frame + i * 2, 0.01)
        keyframe_scale(leaf, leaves_appear_frame + i * 2 + 30, 1.0)

        leaf_objects.append(leaf)

    return branch_objects, leaf_objects


if __name__ == "__main__":
    create_natures_call_animation()

    output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track06_natures_call.blend"))

    bpy.ops.export_scene.gltf(
        filepath=os.path.join(output_dir, "exports", "track06_natures_call.gltf"),
        export_animations=True,
        export_format='GLTF_SEPARATE'
    )

    bpy.ops.wm.alembic_export(
        filepath=os.path.join(output_dir, "exports", "track06_natures_call.abc"),
        start=1, end=TOTAL_FRAMES
    )

    print("Track 06 - Nature's Call: Export complete!")
surya-blender/tracks/track08_idk.py (new file, 180 lines)
@@ -0,0 +1,180 @@
"""
Track 08: IDK - Lorenz Attractor with Trailing Particle
"""

import bpy
import math
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def generate_lorenz_points(num_points=8000, scale=0.12):
    """Generate points along a Lorenz attractor."""
    sigma, rho, beta = 10, 28, 8/3
    dt = 0.005

    x, y, z = 0.1, 0, 0
    points = []

    for _ in range(num_points):
        dx = sigma * (y - x) * dt
        dy = (x * (rho - z) - y) * dt
        dz = (x * y - beta * z) * dt

        x += dx
        y += dy
        z += dz

        points.append((x * scale, y * scale, (z - 25) * scale))

    return points

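Because the curve above is framed by a fixed camera orbit, it matters that the forward-Euler integration stays bounded after the `(z - 25)` recentering and `0.12` scaling. A standalone sketch (`lorenz_points` is a copy of the integrator above, runnable without `bpy`) checking that:

```python
import math

# Standalone copy of the forward-Euler Lorenz integration above (no bpy),
# used to check that the recentered, scaled trail stays within the camera
# framing used in the track.
def lorenz_points(num_points=8000, scale=0.12, dt=0.005):
    sigma, rho, beta = 10, 28, 8 / 3
    x, y, z = 0.1, 0.0, 0.0
    points = []
    for _ in range(num_points):
        dx = sigma * (y - x) * dt
        dy = (x * (rho - z) - y) * dt
        dz = (x * y - beta * z) * dt
        x, y, z = x + dx, y + dy, z + dz
        points.append((x * scale, y * scale, (z - 25) * scale))
    return points

pts = lorenz_points()
assert len(pts) == 8000
# The Lorenz attractor is bounded (|x| ~ 20, |y| ~ 28, z in roughly [0, 48]),
# so after recentering and scaling every coordinate stays well inside ±6
assert all(abs(c) < 6.0 for p in pts for c in p)
```

The same integration is repeated inline later for the secondary trails with perturbed initial conditions; since the attractor is chaotic, those nearby starts diverge into visually distinct trails while remaining inside the same bounded region.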
def create_idk_animation():
    """Create the full Track 08 animation."""
    clear_scene()
    setup_scene(background_color=(0.12, 0.04, 0.18, 1.0))  # Dark purple

    # Create camera
    camera = create_camera(location=(0, -12, 5), rotation=(1.15, 0, 0))
    animate_camera_orbit(camera, center=(0, 0, 0), radius=12, height=5,
                         start_frame=1, end_frame=TOTAL_FRAMES, revolutions=0.5)

    # Generate Lorenz attractor points
    points = generate_lorenz_points(num_points=8000, scale=0.12)

    # Create curve from points (sample every 4th point for performance)
    sampled_points = points[::4]

    curve_data = bpy.data.curves.new(name="LorenzCurve", type='CURVE')
    curve_data.dimensions = '3D'
    curve_data.bevel_depth = 0.015
    curve_data.bevel_resolution = 4

    spline = curve_data.splines.new('NURBS')
    spline.points.add(len(sampled_points) - 1)

    for i, point in enumerate(sampled_points):
        spline.points[i].co = (point[0], point[1], point[2], 1)

    spline.use_endpoint_u = True
    spline.order_u = 4

    attractor = bpy.data.objects.new("LorenzAttractor", curve_data)
    bpy.context.collection.objects.link(attractor)

    # Add material
    mat = create_emission_material("LorenzMat", COLORS["idk"], strength=2.0)
    attractor.data.materials.append(mat)

    # Animate the curve drawing (using bevel factor)
    curve_data.bevel_factor_start = 0.0
    curve_data.bevel_factor_end = 0.0
    curve_data.keyframe_insert(data_path="bevel_factor_end", frame=1)

    curve_data.bevel_factor_end = 1.0
    curve_data.keyframe_insert(data_path="bevel_factor_end", frame=450)

    # Set linear interpolation for smooth drawing
    if curve_data.animation_data:
        for fc in curve_data.animation_data.action.fcurves:
            for kf in fc.keyframe_points:
                kf.interpolation = 'LINEAR'

    # Create trailing particle
    bpy.ops.mesh.primitive_ico_sphere_add(radius=0.15, subdivisions=2, location=points[0])
    particle = bpy.context.active_object
    particle.name = "TrailingParticle"

    particle_mat = create_emission_material("ParticleMat", COLORS["white"], strength=5.0)
    particle.data.materials.append(particle_mat)

    # Animate particle along the last portion of the attractor
    particle_start_frame = 460
    particle_points = points[-200:]
    frames_per_point = max(1, (TOTAL_FRAMES - particle_start_frame - 30) // len(particle_points))

    # Hide particle initially
    keyframe_scale(particle, 1, 0.01)
    keyframe_scale(particle, particle_start_frame - 1, 0.01)
    keyframe_scale(particle, particle_start_frame, 1.0)

    for i, point in enumerate(particle_points):
        frame = particle_start_frame + i * frames_per_point
        if frame <= TOTAL_FRAMES - 30:
            keyframe_location(particle, frame, point)

    # Add some chaos visualization - secondary attractor trails
    colors = [
        (0.8, 0.35, 0.6, 1.0),  # Lighter pink
        (0.6, 0.25, 0.5, 1.0),  # Darker pink
    ]

    for idx, color in enumerate(colors):
        # Generate with slightly different initial conditions
        alt_points = []
        x, y, z = 0.1 + idx * 0.01, 0.01 * idx, 0.01 * idx
        sigma, rho, beta = 10, 28, 8/3
        dt = 0.005

        for _ in range(6000):
            dx = sigma * (y - x) * dt
            dy = (x * (rho - z) - y) * dt
            dz = (x * y - beta * z) * dt
            x += dx
            y += dy
            z += dz
            alt_points.append((x * 0.12, y * 0.12, (z - 25) * 0.12))

        # Create secondary curve
        alt_sampled = alt_points[::6]

        alt_curve_data = bpy.data.curves.new(name=f"LorenzAlt_{idx}", type='CURVE')
        alt_curve_data.dimensions = '3D'
        alt_curve_data.bevel_depth = 0.008

        alt_spline = alt_curve_data.splines.new('NURBS')
        alt_spline.points.add(len(alt_sampled) - 1)

        for i, point in enumerate(alt_sampled):
            alt_spline.points[i].co = (point[0], point[1], point[2], 1)

        alt_spline.use_endpoint_u = True

        alt_attractor = bpy.data.objects.new(f"LorenzAlt_{idx}", alt_curve_data)
|
||||
bpy.context.collection.objects.link(alt_attractor)
|
||||
|
||||
alt_mat = create_emission_material(f"LorenzAltMat_{idx}", color, strength=1.0)
|
||||
alt_attractor.data.materials.append(alt_mat)
|
||||
|
||||
# Animate drawing
|
||||
alt_curve_data.bevel_factor_end = 0.0
|
||||
alt_curve_data.keyframe_insert(data_path="bevel_factor_end", frame=1)
|
||||
|
||||
alt_curve_data.bevel_factor_end = 1.0
|
||||
alt_curve_data.keyframe_insert(data_path="bevel_factor_end", frame=400 + idx * 30)
|
||||
|
||||
return attractor, particle
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
create_idk_animation()
|
||||
|
||||
output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
|
||||
bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track08_idk.blend"))
|
||||
|
||||
bpy.ops.export_scene.gltf(
|
||||
filepath=os.path.join(output_dir, "exports", "track08_idk.gltf"),
|
||||
export_animations=True,
|
||||
export_format='GLTF_SEPARATE'
|
||||
)
|
||||
|
||||
bpy.ops.wm.alembic_export(
|
||||
filepath=os.path.join(output_dir, "exports", "track08_idk.abc"),
|
||||
start=1, end=TOTAL_FRAMES
|
||||
)
|
||||
|
||||
print("Track 08 - IDK: Export complete!")
|
||||
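The secondary trails above integrate the Lorenz system inline with a plain forward-Euler step. As a sanity check, the same stepper can be run outside Blender; this is a standalone sketch (the helper names `lorenz_step` and `integrate` are illustrative, not part of the scripts), confirming the trajectory stays bounded near the attractor:

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz system, matching the inline
    # integration used for the secondary trails (sigma=10, rho=28, beta=8/3).
    dx = sigma * (y - x) * dt
    dy = (x * (rho - z) - y) * dt
    dz = (x * y - beta * z) * dt
    return x + dx, y + dy, z + dz


def integrate(steps=6000, start=(0.1, 0.0, 0.0)):
    x, y, z = start
    points = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        points.append((x, y, z))
    return points


points = integrate()
# The attractor lives well inside |coord| < 100 at these parameters, so a
# stable integration should never leave that box.
assert len(points) == 6000
assert all(abs(c) < 100 for p in points for c in p)
```

With dt = 0.005 the Euler scheme is crude but stable enough for a visual trail; anything coarser risks the trajectory blowing up.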
surya-blender/tracks/track09_with_u.py (new file, 164 lines)
@@ -0,0 +1,164 @@
"""
Track 09: WITH U - Two Orbiting Souls/Spheres with Stars (THE TURN!)
"""

import bpy
import math
import random
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def create_with_u_animation():
    """Create the full Track 09 animation - the emotional centerpiece."""
    clear_scene()
    setup_scene(background_color=(0.047, 0.039, 0.035, 1.0))  # Warm dark

    # Create camera
    camera = create_camera(location=(0, -12, 6), rotation=(1.1, 0, 0))
    animate_camera_orbit(camera, center=(0, 0, 0), radius=12, height=5,
                         start_frame=1, end_frame=TOTAL_FRAMES, revolutions=0.35)

    # Create Soul 1 (main gold)
    soul1 = create_sphere(location=(2, 0, 0), radius=0.4, segments=24, rings=16, name="Soul1")
    soul1_mat = create_emission_material("Soul1Mat", COLORS["with_u"], strength=4.0)
    soul1.data.materials.append(soul1_mat)

    # Create Soul 2 (lighter gold)
    soul2 = create_sphere(location=(-2, 0, 0), radius=0.35, segments=24, rings=16, name="Soul2")
    soul2_mat = create_emission_material("Soul2Mat", (0.988, 0.827, 0.302, 1.0), strength=4.0)
    soul2.data.materials.append(soul2_mat)

    # Entrance animation
    keyframe_scale(soul1, 1, 0.01)
    keyframe_scale(soul1, 60, 1.0)
    keyframe_scale(soul2, 1, 0.01)
    keyframe_scale(soul2, 60, 1.0)

    # Orbital dance animation
    orbit_start = 90
    orbit_end = 550

    for frame in range(orbit_start, orbit_end + 1, 3):
        t = (frame - orbit_start) / (orbit_end - orbit_start)
        angle = t * 2.5 * math.pi

        # Elliptical orbits with vertical motion
        r1 = 2 + 0.3 * math.sin(angle * 2)
        r2 = 2 + 0.3 * math.sin(angle * 2 + math.pi)

        pos1 = (
            r1 * math.cos(angle),
            r1 * math.sin(angle),
            0.5 * math.sin(angle * 3)
        )
        pos2 = (
            r2 * math.cos(angle + math.pi),
            r2 * math.sin(angle + math.pi),
            0.5 * math.sin(angle * 3 + math.pi)
        )

        keyframe_location(soul1, frame, pos1)
        keyframe_location(soul2, frame, pos2)

    # Create orbital trails using curves
    trail1_points = []
    trail2_points = []

    for t in range(100):
        angle = t / 100 * 2.5 * math.pi
        r1 = 2 + 0.3 * math.sin(angle * 2)
        r2 = 2 + 0.3 * math.sin(angle * 2 + math.pi)

        trail1_points.append((
            r1 * math.cos(angle),
            r1 * math.sin(angle),
            0.5 * math.sin(angle * 3)
        ))
        trail2_points.append((
            r2 * math.cos(angle + math.pi),
            r2 * math.sin(angle + math.pi),
            0.5 * math.sin(angle * 3 + math.pi)
        ))

    trail1 = create_curve_from_points(trail1_points, name="Trail1", bevel_depth=0.015)
    trail1_mat = create_emission_material("Trail1Mat", COLORS["with_u"], strength=1.5)
    trail1.data.materials.append(trail1_mat)

    trail2 = create_curve_from_points(trail2_points, name="Trail2", bevel_depth=0.015)
    trail2_mat = create_emission_material("Trail2Mat", (0.988, 0.827, 0.302, 1.0), strength=1.5)
    trail2.data.materials.append(trail2_mat)

    # Animate trails appearing
    trail1.data.bevel_factor_end = 0.0
    trail1.data.keyframe_insert(data_path="bevel_factor_end", frame=orbit_start)
    trail1.data.bevel_factor_end = 1.0
    trail1.data.keyframe_insert(data_path="bevel_factor_end", frame=orbit_end)

    trail2.data.bevel_factor_end = 0.0
    trail2.data.keyframe_insert(data_path="bevel_factor_end", frame=orbit_start)
    trail2.data.bevel_factor_end = 1.0
    trail2.data.keyframe_insert(data_path="bevel_factor_end", frame=orbit_end)

    # EPIC MOMENT - Stars appear!
    stars_appear_frame = 480
    stars = create_star_field(count=200, radius=15, min_size=0.02, max_size=0.06)

    for i, star in enumerate(stars):
        keyframe_scale(star, 1, 0.01)
        keyframe_scale(star, stars_appear_frame + i // 5, 0.01)
        keyframe_scale(star, stars_appear_frame + i // 5 + 30, 1.0)

    # Souls come together
    union_start = 580
    union_end = 650

    keyframe_location(soul1, union_start, soul1.location[:])
    keyframe_location(soul2, union_start, soul2.location[:])

    keyframe_location(soul1, union_end, (0.35, 0, 0))
    keyframe_location(soul2, union_end, (-0.35, 0, 0))

    # Glow brighter together
    # (In Blender, we'd animate material emission strength, but for export compatibility
    # we'll animate scale as a visual indicator)
    keyframe_scale(soul1, union_end, 1.0)
    keyframe_scale(soul1, union_end + 30, 1.5)
    keyframe_scale(soul2, union_end, 1.0)
    keyframe_scale(soul2, union_end + 30, 1.5)

    # Dim stars when souls unite
    for star in stars:
        keyframe_scale(star, union_end, 1.0)
        keyframe_scale(star, union_end + 30, 0.7)

    # Exit animation
    keyframe_scale(soul1, TOTAL_FRAMES - 30, 1.5)
    keyframe_scale(soul1, TOTAL_FRAMES, 0.01)
    keyframe_scale(soul2, TOTAL_FRAMES - 30, 1.5)
    keyframe_scale(soul2, TOTAL_FRAMES, 0.01)

    return soul1, soul2, trail1, trail2, stars


if __name__ == "__main__":
    create_with_u_animation()

    output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track09_with_u.blend"))

    bpy.ops.export_scene.gltf(
        filepath=os.path.join(output_dir, "exports", "track09_with_u.gltf"),
        export_animations=True,
        export_format='GLTF_SEPARATE'
    )

    bpy.ops.wm.alembic_export(
        filepath=os.path.join(output_dir, "exports", "track09_with_u.abc"),
        start=1, end=TOTAL_FRAMES
    )

    print("Track 09 - With U: Export complete!")
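A detail of the orbital-dance parametrization above worth making explicit: the two souls' vertical motions are exactly phase-opposed (when one rises, the other falls by the same amount), which is what makes the dance read as a mirrored pair. A standalone sketch of the same math (the helper name `soul_positions` is illustrative):

```python
import math


def soul_positions(t):
    """Positions of the two souls at normalized orbit time t in [0, 1],
    mirroring the parametrization in create_with_u_animation()."""
    angle = t * 2.5 * math.pi
    r1 = 2 + 0.3 * math.sin(angle * 2)
    r2 = 2 + 0.3 * math.sin(angle * 2 + math.pi)
    pos1 = (r1 * math.cos(angle),
            r1 * math.sin(angle),
            0.5 * math.sin(angle * 3))
    pos2 = (r2 * math.cos(angle + math.pi),
            r2 * math.sin(angle + math.pi),
            0.5 * math.sin(angle * 3 + math.pi))
    return pos1, pos2


# sin(x + pi) == -sin(x), so z2 == -z1 at every instant of the orbit.
for i in range(101):
    p1, p2 = soul_positions(i / 100)
    assert math.isclose(p1[2], -p2[2], abs_tol=1e-9)
```

The radii r1 and r2 are also anti-phased, so the orbits breathe in opposition rather than as a rigid mirror image.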
surya-blender/tracks/track14_hollow.py (new file, 189 lines)
@@ -0,0 +1,189 @@
"""
Track 14: HOLLOW - Golden Spiral to Moon (Epic Finale)
"""

import bpy
import math
import random
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils import *


def generate_golden_spiral_points(num_points=300, scale=0.1):
    """Generate points along a golden spiral."""
    phi = (1 + math.sqrt(5)) / 2  # Golden ratio
    points = []

    for i in range(num_points):
        t = i / num_points * 6 * math.pi
        r = scale * phi ** (t / (math.pi / 2))

        x = r * math.cos(t)
        y = r * math.sin(t)
        z = t * 0.05

        points.append((x, y, z))

    return points


def create_hollow_animation():
    """Create the full Track 14 animation - the grand finale."""
    clear_scene()
    setup_scene(background_color=COLORS["intro"])  # Deep purple

    # Create camera with epic pullback
    camera = create_camera(location=(0, -10, 4), rotation=(1.15, 0, 0))

    # Initial orbit
    for frame in range(1, 500):
        t = frame / 500
        angle = t * math.pi * 0.3

        x = 10 * math.cos(angle)
        y = -10 * math.sin(angle) - 8
        z = 4 + t * 2

        camera.location = (x, y, z)
        camera.keyframe_insert(data_path="location", frame=frame)

    # Epic pullback
    for frame in range(500, TOTAL_FRAMES + 1):
        t = (frame - 500) / (TOTAL_FRAMES - 500)

        # Pull back and rise
        zoom_out = 1 + t * 0.8
        x = 10 * zoom_out * math.cos(math.pi * 0.3)
        y = (-10 * math.sin(math.pi * 0.3) - 8) * zoom_out
        z = 6 + t * 4

        camera.location = (x, y, z)
        camera.keyframe_insert(data_path="location", frame=frame)

    # Generate golden spiral
    spiral_points = generate_golden_spiral_points(num_points=300, scale=0.1)

    # Create spiral curve
    spiral_data = bpy.data.curves.new(name="GoldenSpiral", type='CURVE')
    spiral_data.dimensions = '3D'
    spiral_data.bevel_depth = 0.03
    spiral_data.bevel_resolution = 6

    spline = spiral_data.splines.new('NURBS')
    spline.points.add(len(spiral_points) - 1)

    for i, point in enumerate(spiral_points):
        spline.points[i].co = (point[0], point[1], point[2], 1)

    spline.use_endpoint_u = True
    spline.order_u = 4

    spiral = bpy.data.objects.new("GoldenSpiral", spiral_data)
    bpy.context.collection.objects.link(spiral)

    spiral_mat = create_emission_material("SpiralMat", COLORS["hollow"], strength=3.0)
    spiral.data.materials.append(spiral_mat)

    # Animate spiral drawing
    spiral_data.bevel_factor_end = 0.0
    spiral_data.keyframe_insert(data_path="bevel_factor_end", frame=1)

    spiral_data.bevel_factor_end = 1.0
    spiral_data.keyframe_insert(data_path="bevel_factor_end", frame=300)

    # Create Moon
    moon = create_sphere(location=(8, 5, 5), radius=1.5, segments=32, rings=24, name="Moon")
    moon_mat = create_emission_material("MoonMat", (0.996, 0.953, 0.780, 1.0), strength=2.5)
    moon.data.materials.append(moon_mat)

    # Moon appears
    keyframe_scale(moon, 1, 0.01)
    keyframe_scale(moon, 200, 0.01)
    keyframe_scale(moon, 280, 1.0)

    # Create stars
    stars = []
    star_mat = create_emission_material("StarMat", COLORS["white"], strength=4.0)

    for i in range(150):
        pos = (
            random.uniform(-15, 15),
            random.uniform(-15, 15),
            random.uniform(-5, 12)
        )

        bpy.ops.mesh.primitive_ico_sphere_add(
            radius=random.uniform(0.02, 0.05),
            subdivisions=1,
            location=pos
        )
        star = bpy.context.active_object
        star.name = f"Star_{i:03d}"
        star.data.materials.append(star_mat)

        # Staggered appearance
        appear_frame = 320 + i * 2
        keyframe_scale(star, 1, 0.01)
        keyframe_scale(star, appear_frame, 0.01)
        keyframe_scale(star, appear_frame + 30, 1.0)

        stars.append(star)

    # Add some additional golden particles along the spiral
    particles = []
    particle_mat = create_emission_material("ParticleMat", COLORS["hollow"], strength=5.0)

    for i in range(20):
        idx = int(i / 20 * len(spiral_points))
        pos = spiral_points[idx]

        bpy.ops.mesh.primitive_ico_sphere_add(radius=0.06, subdivisions=1, location=pos)
        p = bpy.context.active_object
        p.name = f"SpiralParticle_{i:03d}"
        p.data.materials.append(particle_mat)

        # Animate along spiral path
        appear_frame = 30 + int(i / 20 * 270)
        keyframe_scale(p, 1, 0.01)
        keyframe_scale(p, appear_frame, 1.0)

        particles.append(p)

    # Final glow effect - enlarge moon
    keyframe_scale(moon, 600, 1.0)
    keyframe_scale(moon, 680, 1.3)

    # Spiral gets brighter (thicker)
    spiral_data.bevel_depth = 0.03
    spiral_data.keyframe_insert(data_path="bevel_depth", frame=600)
    spiral_data.bevel_depth = 0.05
    spiral_data.keyframe_insert(data_path="bevel_depth", frame=680)

    # Gentle fade
    keyframe_scale(moon, TOTAL_FRAMES - 30, 1.3)
    keyframe_scale(moon, TOTAL_FRAMES, 0.01)

    return spiral, moon, stars, particles


if __name__ == "__main__":
    create_hollow_animation()

    output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    bpy.ops.wm.save_as_mainfile(filepath=os.path.join(output_dir, "exports", "track14_hollow.blend"))

    bpy.ops.export_scene.gltf(
        filepath=os.path.join(output_dir, "exports", "track14_hollow.gltf"),
        export_animations=True,
        export_format='GLTF_SEPARATE'
    )

    bpy.ops.wm.alembic_export(
        filepath=os.path.join(output_dir, "exports", "track14_hollow.abc"),
        start=1, end=TOTAL_FRAMES
    )

    print("Track 14 - Hollow: Export complete!")
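The defining property of the spiral in `generate_golden_spiral_points()` is that the radius `r = scale * phi ** (t / (pi / 2))` multiplies by the golden ratio phi on every quarter turn, which is what makes it a golden spiral rather than a generic logarithmic one. A standalone check of that growth law (the helper name `spiral_radius` is illustrative):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # Golden ratio, ~1.618


def spiral_radius(t, scale=0.1):
    # Radius law used by generate_golden_spiral_points()
    return scale * PHI ** (t / (math.pi / 2))


# A golden spiral grows by exactly a factor of phi per quarter turn,
# regardless of where on the spiral you measure.
for t in (0.0, 1.0, 4.0):
    ratio = spiral_radius(t + math.pi / 2) / spiral_radius(t)
    assert math.isclose(ratio, PHI)
```

With 300 points over 6*pi radians (three full turns), the final radius is scale * phi^12, roughly 32 units at scale=0.1, which is why the finale needs the camera pullback.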
surya-blender/utils.py (new file, 238 lines)
@@ -0,0 +1,238 @@
"""
SURYA Blender Utilities
Shared functions for all track animations
"""

import bpy
import math
from mathutils import Vector, Matrix, Euler
import random

# Color palette (RGBA normalized 0-1)
COLORS = {
    "skin": (0.957, 0.447, 0.714, 1.0),          # #f472b6 Pink
    "u_saved_me": (0.133, 0.827, 0.933, 1.0),    # #22d3ee Cyan
    "nothing": (0.4, 0.4, 0.4, 1.0),             # #666666 Grey
    "natures_call": (0.369, 0.918, 0.831, 1.0),  # #5eead4 Teal
    "idk": (0.957, 0.447, 0.714, 1.0),           # #f472b6 Pink
    "with_u": (0.984, 0.749, 0.141, 1.0),        # #fbbf24 Gold
    "hollow": (0.984, 0.749, 0.141, 1.0),        # #fbbf24 Gold
    "intro": (0.102, 0.039, 0.180, 1.0),         # #1a0a2e Deep purple
    "white": (1.0, 1.0, 1.0, 1.0),
    "black": (0.0, 0.0, 0.0, 1.0),
}

# Animation settings
FPS = 30
DURATION_SECONDS = 25
TOTAL_FRAMES = FPS * DURATION_SECONDS  # 750 frames


def clear_scene():
    """Remove all objects from the current scene."""
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete(use_global=False)

    # Clear orphan data
    for block in bpy.data.meshes:
        if block.users == 0:
            bpy.data.meshes.remove(block)
    for block in bpy.data.materials:
        if block.users == 0:
            bpy.data.materials.remove(block)
    for block in bpy.data.curves:
        if block.users == 0:
            bpy.data.curves.remove(block)


def create_scene(name):
    """Create a new scene with the given name."""
    scene = bpy.data.scenes.new(name)
    bpy.context.window.scene = scene
    scene.frame_start = 1
    scene.frame_end = TOTAL_FRAMES
    scene.render.fps = FPS
    return scene


def setup_scene(background_color=(0.1, 0.04, 0.18, 1.0)):
    """Setup scene with world, frame range, and render settings."""
    # Set world background
    world = bpy.context.scene.world
    if world is None:
        world = bpy.data.worlds.new("World")
        bpy.context.scene.world = world

    world.use_nodes = True
    bg_node = world.node_tree.nodes.get("Background")
    if bg_node:
        bg_node.inputs[0].default_value = background_color
        bg_node.inputs[1].default_value = 1.0  # Strength

    # Set frame range to match the track duration (the default scene ends at
    # frame 250, which would truncate the 750-frame animations on playback)
    scene = bpy.context.scene
    scene.frame_start = 1
    scene.frame_end = TOTAL_FRAMES
    scene.render.fps = FPS

    # Set render settings
    scene.render.resolution_x = 1920
    scene.render.resolution_y = 1080


def create_camera(location=(0, -10, 5), rotation=(1.1, 0, 0)):
    """Create and setup camera."""
    bpy.ops.object.camera_add(location=location)
    camera = bpy.context.active_object
    camera.rotation_euler = Euler(rotation, 'XYZ')
    bpy.context.scene.camera = camera
    return camera


def create_emission_material(name, color, strength=2.0):
    """Create an emission material for glowing objects."""
    mat = bpy.data.materials.new(name=name)
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    # Clear default nodes
    nodes.clear()

    # Create emission shader
    emission = nodes.new('ShaderNodeEmission')
    emission.inputs[0].default_value = color
    emission.inputs[1].default_value = strength

    # Output
    output = nodes.new('ShaderNodeOutputMaterial')
    links.new(emission.outputs[0], output.inputs[0])

    return mat


def create_basic_material(name, color, metallic=0.0, roughness=0.5):
    """Create a basic PBR material."""
    mat = bpy.data.materials.new(name=name)
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes.get("Principled BSDF")
    if bsdf:
        bsdf.inputs["Base Color"].default_value = color
        bsdf.inputs["Metallic"].default_value = metallic
        bsdf.inputs["Roughness"].default_value = roughness
    return mat


def create_sphere(location=(0, 0, 0), radius=1.0, segments=32, rings=16, name="Sphere"):
    """Create a UV sphere."""
    bpy.ops.mesh.primitive_uv_sphere_add(
        radius=radius,
        segments=segments,
        ring_count=rings,
        location=location
    )
    obj = bpy.context.active_object
    obj.name = name
    return obj


def create_icosahedron(location=(0, 0, 0), radius=1.0, name="Icosahedron"):
    """Create an icosahedron."""
    bpy.ops.mesh.primitive_ico_sphere_add(
        radius=radius,
        subdivisions=1,
        location=location
    )
    obj = bpy.context.active_object
    obj.name = name
    return obj


def create_curve_from_points(points, name="Curve", bevel_depth=0.02):
    """Create a curve from a list of 3D points."""
    curve_data = bpy.data.curves.new(name=name, type='CURVE')
    curve_data.dimensions = '3D'
    curve_data.bevel_depth = bevel_depth
    curve_data.bevel_resolution = 4

    spline = curve_data.splines.new('NURBS')
    spline.points.add(len(points) - 1)

    for i, point in enumerate(points):
        spline.points[i].co = (point[0], point[1], point[2], 1)

    spline.use_endpoint_u = True

    curve_obj = bpy.data.objects.new(name, curve_data)
    bpy.context.collection.objects.link(curve_obj)

    return curve_obj


def keyframe_location(obj, frame, location):
    """Set a location keyframe."""
    obj.location = location
    obj.keyframe_insert(data_path="location", frame=frame)


def keyframe_scale(obj, frame, scale):
    """Set a scale keyframe. Accepts a scalar (uniform) or a 3-tuple."""
    if isinstance(scale, (int, float)):
        scale = (scale, scale, scale)
    obj.scale = scale
    obj.keyframe_insert(data_path="scale", frame=frame)


def keyframe_rotation(obj, frame, rotation):
    """Set a rotation keyframe (Euler)."""
    obj.rotation_euler = rotation
    obj.keyframe_insert(data_path="rotation_euler", frame=frame)


def animate_camera_orbit(camera, center=(0, 0, 0), radius=10, height=5,
                         start_frame=1, end_frame=750, revolutions=1):
    """Animate camera orbiting around a center point, always looking at it."""
    for frame in range(start_frame, end_frame + 1):
        t = (frame - start_frame) / (end_frame - start_frame)
        angle = t * 2 * math.pi * revolutions

        x = center[0] + radius * math.cos(angle)
        y = center[1] + radius * math.sin(angle)
        z = center[2] + height

        camera.location = (x, y, z)
        camera.keyframe_insert(data_path="location", frame=frame)

        # Point at center
        direction = Vector(center) - Vector((x, y, z))
        rot_quat = direction.to_track_quat('-Z', 'Y')
        camera.rotation_euler = rot_quat.to_euler()
        camera.keyframe_insert(data_path="rotation_euler", frame=frame)


def create_star_field(count=200, radius=20, min_size=0.02, max_size=0.08):
    """Create a field of star particles."""
    stars = []
    for i in range(count):
        x = random.uniform(-radius, radius)
        y = random.uniform(-radius, radius)
        z = random.uniform(-radius / 2, radius)
        size = random.uniform(min_size, max_size)

        bpy.ops.mesh.primitive_ico_sphere_add(radius=size, subdivisions=1, location=(x, y, z))
        star = bpy.context.active_object
        star.name = f"Star_{i:03d}"

        # Add emission material
        mat = create_emission_material(f"StarMat_{i:03d}", COLORS["white"], strength=5.0)
        star.data.materials.append(mat)
        stars.append(star)

    return stars


def smooth_interpolation(t):
    """Smooth step interpolation (ease in-out)."""
    return t * t * (3 - 2 * t)


def ease_in_out(t):
    """Ease in-out cubic."""
    if t < 0.5:
        return 4 * t * t * t
    else:
        return 1 - pow(-2 * t + 2, 3) / 2
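The two easing helpers at the bottom of utils.py can be checked against the standard easing invariants: both endpoints fixed, midpoint at exactly 0.5, and monotone non-decreasing on [0, 1]. A minimal standalone check (the functions are duplicated here so the sketch runs outside Blender):

```python
def smooth_interpolation(t):
    # Smoothstep: 3t^2 - 2t^3
    return t * t * (3 - 2 * t)


def ease_in_out(t):
    # Cubic ease in-out, piecewise around t = 0.5
    if t < 0.5:
        return 4 * t * t * t
    return 1 - pow(-2 * t + 2, 3) / 2


for f in (smooth_interpolation, ease_in_out):
    # Endpoints and midpoint are fixed points of any symmetric easing curve
    assert f(0.0) == 0.0 and f(1.0) == 1.0 and f(0.5) == 0.5
    # Monotone non-decreasing across [0, 1]
    samples = [f(i / 100) for i in range(101)]
    assert all(a <= b for a, b in zip(samples, samples[1:]))
```

The cubic ease has a flatter start and end than smoothstep (zero first and second derivative at the endpoints on each half), so it reads as a more pronounced pause before and after a move.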