feat(skills): add nukepedia-tools skill

Search Nukepedia's 2300+ free Nuke tools catalog. Includes:

- scrape.py: build catalog from nukepedia.com (rate-limited)
- search.py: query by name, category, rating, author
- pre-scraped catalog with 2341 tools

Categories: gizmos, python, plugins, toolsets, blink, hiero, etc.

Support Nukepedia: https://nukepedia.com/donate

parent 064b8af33f · commit c1ea14975e
@@ -131,9 +131,13 @@ For more information, check out:
---

Full Skill Registry
===

{skills will be listed here}

| Skill | Description |
|-------|-------------|
| **[aces-vfx](skills/aces-vfx)** | ACES color management for VFX. IDTs, ODTs, OCIO configs, debugging color issues, and working across cameras and deliveries. |
| **[ae-to-nuke](skills/ae-to-nuke)** | Convert After Effects projects to Nuke. Layer translation, expression conversion, and workflow guidance. |
| **[nukepedia-tools](skills/nukepedia-tools)** | Search Nukepedia's 2300+ free Nuke tools. Find gizmos, python scripts, plugins by name, category, or rating. Includes scraper to refresh catalog. |

Coming Soon
---
102  skills/nukepedia-tools/SKILL.md  (new file)

@@ -0,0 +1,102 @@
---
name: nukepedia-tools
description: Search Nukepedia's 2300+ free Nuke tools to find gizmos, python scripts, plugins, and toolsets. Use when an artist needs a Nuke gizmo, keyer, tracker, color tool, python script, Blink script, or any compositing utility. Does NOT download - only discovers tools and provides links. Always remind users to support Nukepedia.
---

# Nukepedia Tools

Search the Nukepedia catalog to find free VFX tools for Nuke.

**Support Nukepedia:** https://nukepedia.com/donate

This skill discovers tools. It does NOT download anything. Users must visit nukepedia.com directly to download.
## Searching the Catalog

Use `scripts/search.py` to query the catalog:

```bash
# basic search
python scripts/search.py keyer

# filter by category
python scripts/search.py edge --category gizmos

# high-rated tools
python scripts/search.py "" --category python --min-rating 4

# by author
python scripts/search.py "" --author "falk"

# verbose output
python scripts/search.py tracker --verbose

# json output for parsing
python scripts/search.py blur --json
```
Or use `jq` directly on `data/nukepedia-catalog.json`:

```bash
# search by name
jq '.tools[] | select(.name | test("keyer"; "i")) | {name, rating, url}' data/nukepedia-catalog.json

# top by downloads
jq '[.tools[]] | sort_by(-.downloads) | .[0:10] | .[] | {name, downloads, url}' data/nukepedia-catalog.json
```
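When `jq` is unavailable, the same two queries can be done with the Python standard library alone. A sketch assuming the catalog schema produced by scrape.py; the sample entries below are made up, not real catalog data:

```python
def find_by_name(catalog: dict, query: str) -> list:
    """Case-insensitive name search (mirrors the jq test("keyer"; "i") query)."""
    q = query.lower()
    return [t for t in catalog["tools"] if q in t["name"].lower()]

def top_by_downloads(catalog: dict, n: int = 10) -> list:
    """Most-downloaded tools first (mirrors the jq sort_by(-.downloads) query)."""
    return sorted(catalog["tools"], key=lambda t: -t.get("downloads", 0))[:n]

# Tiny inline sample standing in for data/nukepedia-catalog.json.
sample = {"tools": [
    {"name": "EdgeKeyer", "downloads": 500, "url": "https://nukepedia.com/tools/x"},
    {"name": "FastBlur", "downloads": 900, "url": "https://nukepedia.com/tools/y"},
]}
print([t["name"] for t in find_by_name(sample, "keyer")])  # ['EdgeKeyer']
print(top_by_downloads(sample, 1)[0]["name"])              # FastBlur
```

In real use, load the dict with `json.load(open("data/nukepedia-catalog.json"))` first.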
## Categories

| category | description | ~tools |
|----------|-------------|--------|
| gizmos | Node groups: keyers, filters, effects, color | 1286 |
| python | Scripts for UI, pipeline, automation | 657 |
| plugins | Compiled C++ plugins | 116 |
| toolsets | Collections of related gizmos | 127 |
| blink | GPU-accelerated BlinkScript nodes | 79 |
| miscellaneous | Docs, tutorials, templates | 46 |
| hiero | Tools for Hiero/NukeStudio | 38 |
| presets | Node presets | 6 |
| tcl-scripts | Legacy Tcl scripts | 1 |
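The approximate counts above come from the pre-scraped catalog; they can be recomputed locally with `collections.Counter` (the sample entries here are made up stand-ins for the real catalog):

```python
from collections import Counter

def category_counts(catalog: dict) -> Counter:
    """Tally tools per top-level category."""
    return Counter(t["category"] for t in catalog["tools"])

# Tiny inline sample standing in for data/nukepedia-catalog.json.
sample = {"tools": [
    {"category": "gizmos"}, {"category": "gizmos"}, {"category": "python"},
]}
print(category_counts(sample))  # Counter({'gizmos': 2, 'python': 1})
```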
## Response Format

When recommending tools:

```
## Tool Name
**Category:** {category} / {subcategory}
**Rating:** {rating}/5
**Author:** {author}

{description}

**Link:** {url}
```

Always end with:

```
Support Nukepedia: https://nukepedia.com/donate
```
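Should the template need filling programmatically, a minimal renderer might look like this (a hypothetical helper, not part of the skill's scripts; field names follow the catalog schema):

```python
def render_tool(tool: dict) -> str:
    """Fill the recommendation template from a catalog entry."""
    rating = f"{tool['rating']:.1f}" if tool.get("rating") is not None else "unrated"
    return (
        f"## {tool['name']}\n"
        f"**Category:** {tool['category']} / {tool['subcategory']}\n"
        f"**Rating:** {rating}/5\n"
        f"**Author:** {tool['author']}\n\n"
        f"{tool['description']}\n\n"
        f"**Link:** {tool['url']}"
    )

# Made-up entry for illustration only.
sample = {"name": "EdgeKeyer", "category": "gizmos", "subcategory": "keyer",
          "rating": 4.5, "author": "Jane Doe",
          "description": "Edge-refining keyer.",
          "url": "https://nukepedia.com/tools/x"}
print(render_tool(sample))
```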
## Updating the Catalog

Refresh the catalog (rate limited to 1 req/sec):

```bash
python scripts/scrape.py
```

For full details on each tool:

```bash
python scripts/scrape.py --full
```

Resume an interrupted scrape:

```bash
python scripts/scrape.py --resume
```
49175  skills/nukepedia-tools/data/nukepedia-catalog.json  (new file)

File diff suppressed because it is too large

353  skills/nukepedia-tools/scripts/scrape.py  (new file)

@@ -0,0 +1,353 @@
#!/usr/bin/env python3
"""
Scrape Nukepedia.com to build a searchable catalog of VFX tools.

Usage:
    python scrape.py [--full] [--resume] [--output PATH]

Options:
    --full     Fetch detail pages for each tool (slower, ~2400 requests)
    --resume   Resume an interrupted scrape
    --output   Output path (default: ../data/nukepedia-catalog.json)

Rate limited to 1 req/sec. Please support Nukepedia: https://nukepedia.com/donate
"""

import argparse
import json
import re
import sys
import time
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

try:
    import requests
    from bs4 import BeautifulSoup
except ImportError:
    print("Missing dependencies. Install with:")
    print("  pip install requests beautifulsoup4")
    sys.exit(1)

BASE_URL = "https://nukepedia.com"
USER_AGENT = "gizmosearch/0.1 (VFX tool catalog builder; respects robots.txt)"
REQUEST_DELAY = 1.0
MAX_RETRIES = 3
PROGRESS_FILE = ".gizmosearch_progress.json"

CATEGORIES = [
    ("gizmos", ["deep", "image", "particles", "draw", "time", "channel",
                "colour", "filter", "keyer", "merge", "transform", "3d",
                "stereo", "metadata", "other"]),
    ("python", ["import-export", "render", "flipbook", "misc", "3d",
                "nodegraph", "ui", "deep"]),
    ("plugins", ["image", "time", "draw", "channel", "colour", "filter",
                 "keyer", "merge", "transform", "3d", "other"]),
    ("toolsets", ["deep", "image", "particles", "draw", "time", "channel",
                  "colour", "filter", "keyer", "merge", "transform", "3d",
                  "stereo", "metadata", "other"]),
    ("blink", ["deep", "image", "particles", "draw", "time", "channel",
               "colour", "filter", "keyer", "merge", "transform", "3d",
               "stereo", "metadata", "other"]),
    ("miscellaneous", []),
    ("hiero", ["python", "softeffects"]),
    ("presets", []),
    ("tcl-scripts", []),
]

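At 1 req/sec, the listing pass in `scrape()` fetches at least one URL per (category, subcategory) pair. A quick tally of the table above (subcategory counts copied by hand from CATEGORIES, so they must be kept in sync manually):

```python
# (category: number of subcategories) mirrored from CATEGORIES above;
# categories with an empty subcategory list still get one listing URL each.
subcat_counts = {"gizmos": 15, "python": 8, "plugins": 11, "toolsets": 15,
                 "blink": 15, "miscellaneous": 0, "hiero": 2, "presets": 0,
                 "tcl-scripts": 0}
urls = sum(n if n else 1 for n in subcat_counts.values())
print(urls)  # 69
```

Pagination multiplies that minimum, and `--full` adds roughly 2400 detail-page requests on top.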
@dataclass
class Tool:
    id: str
    name: str
    category: str
    subcategory: str
    author: str = ""
    description: str = ""
    rating: Optional[float] = None
    rating_count: int = 0
    downloads: int = 0
    nuke_versions: str = ""
    platforms: list = field(default_factory=list)
    license: Optional[str] = None
    url: str = ""
    last_updated: Optional[str] = None
    scraped_at: str = ""


@dataclass
class Catalog:
    version: str
    scraped_at: str
    tool_count: int
    nukepedia_support: dict
    tools: list


def get_support_info():
    return {
        "message": "Nukepedia is a free, community-run resource serving VFX artists since 2008. Please support them!",
        "donate_url": "https://nukepedia.com/donate",
        "prouser_url": "https://nukepedia.com/prouser",
        "website": "https://nukepedia.com",
        "contribute_url": "https://nukepedia.com/my-uploads/new/",
    }

class Scraper:
    def __init__(self, full_scrape=False, resume=False):
        self.session = requests.Session()
        self.session.headers["User-Agent"] = USER_AGENT
        self.full_scrape = full_scrape
        self.tools = []
        self.completed_urls = set()

        if resume:
            self._load_progress()

    def _load_progress(self):
        try:
            with open(PROGRESS_FILE) as f:
                data = json.load(f)
            self.tools = [Tool(**t) for t in data.get("tools", [])]
            self.completed_urls = set(data.get("completed_urls", []))
            print(f"  Resumed: {len(self.tools)} tools, {len(self.completed_urls)} pages")
        except FileNotFoundError:
            pass

    def _save_progress(self):
        with open(PROGRESS_FILE, "w") as f:
            json.dump({
                "tools": [asdict(t) for t in self.tools],
                "completed_urls": list(self.completed_urls),
            }, f)

    def _cleanup_progress(self):
        Path(PROGRESS_FILE).unlink(missing_ok=True)

    def _fetch(self, url: str) -> Optional[str]:
        for attempt in range(MAX_RETRIES):
            try:
                resp = self.session.get(url, timeout=30, allow_redirects=True)
                if resp.status_code == 200:
                    return resp.text
                elif resp.status_code == 429:
                    # back off exponentially when rate limited
                    wait = (2 ** attempt) * REQUEST_DELAY
                    print(f"  Rate limited, waiting {wait}s...")
                    time.sleep(wait)
                else:
                    print(f"  HTTP {resp.status_code}: {url}")
                    return None
            except requests.RequestException as e:
                if attempt < MAX_RETRIES - 1:
                    time.sleep(REQUEST_DELAY * (attempt + 1))
                else:
                    print(f"  Error fetching {url}: {e}")
                    return None
        return None

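A side note on the 429 handling in `_fetch` above: with the module's constants, the exponential backoff schedule works out to three short waits (a quick check, restating the constant values rather than importing the script):

```python
REQUEST_DELAY = 1.0  # values mirrored from the constants above
MAX_RETRIES = 3

waits = [(2 ** attempt) * REQUEST_DELAY for attempt in range(MAX_RETRIES)]
print(waits)  # [1.0, 2.0, 4.0]
```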
    def scrape(self) -> Catalog:
        print("\n GizmoSearch")
        print(" ===========\n")
        print(" Building a catalog of free VFX tools from nukepedia.com")
        print(" Please support Nukepedia: https://nukepedia.com/donate\n")

        for cat_name, subcats in CATEGORIES:
            if subcats:
                for subcat in subcats:
                    self._scrape_category(cat_name, subcat)
            else:
                self._scrape_category(cat_name, "")

        if self.full_scrape:
            self._scrape_detail_pages()

        self._cleanup_progress()

        now = datetime.now(timezone.utc).isoformat()
        catalog = Catalog(
            version="1.0",
            scraped_at=now,
            tool_count=len(self.tools),
            nukepedia_support=get_support_info(),
            tools=[asdict(t) for t in self.tools],
        )

        print(f"\n Scraped {len(self.tools)} tools")
        print(" Support Nukepedia: https://nukepedia.com/donate\n")

        return catalog

    def _scrape_category(self, category: str, subcategory: str):
        if subcategory:
            url = f"{BASE_URL}/tools/{category}/{subcategory}"
            display = f"{category}/{subcategory}"
        else:
            url = f"{BASE_URL}/tools/{category}"
            display = category

        if url in self.completed_urls:
            print(f"  {display} (cached)")
            return

        print(f"  {display}...", end="", flush=True)

        page = 1
        count = 0
        while True:
            page_url = f"{url}?page={page}" if page > 1 else url
            html = self._fetch(page_url)
            if not html:
                break

            soup = BeautifulSoup(html, "html.parser")
            tools = self._parse_tool_cards(soup, category, subcategory or "general")

            if not tools:
                break

            self.tools.extend(tools)
            count += len(tools)

            # check for next page
            next_link = soup.select_one(".pagination a.next, a[rel='next']")
            if not next_link or "disabled" in next_link.get("class", []):
                break

            page += 1
            time.sleep(REQUEST_DELAY)

        self.completed_urls.add(url)
        self._save_progress()
        print(f" {count} tools")
        time.sleep(REQUEST_DELAY)

    def _parse_tool_cards(self, soup: BeautifulSoup, category: str, subcategory: str) -> list:
        """Parse tool cards from a listing page. Nukepedia uses <a class="tool-card"> with data attributes."""
        tools = []

        # find all tool cards - <a> tags with class "tool-card" and data attributes
        cards = soup.select("a.tool-card[data-name]")

        for card in cards:
            href = card.get("href", "")
            if not href or not href.startswith("/tools/"):
                continue

            url = f"{BASE_URL}{href}"
            tool_id = href.rstrip("/").split("/")[-1]

            name = card.get("data-name", tool_id)
            author = card.get("data-author", "")
            downloads = int(card.get("data-downloads", 0) or 0)
            date_str = card.get("data-date", "")
            rating_str = card.get("data-rating", "")

            rating = None
            if rating_str:
                try:
                    rating = float(rating_str)
                except ValueError:
                    pass

            last_updated = date_str or None

            # get description from the card content
            desc_el = card.select_one(".description, .excerpt, p")
            description = desc_el.get_text(strip=True) if desc_el else ""

            # get rating count if available
            rating_count = 0
            rating_el = card.select_one(".rating-count, .votes")
            if rating_el:
                match = re.search(r"(\d+)", rating_el.get_text())
                if match:
                    rating_count = int(match.group(1))

            tools.append(Tool(
                id=tool_id,
                name=name,
                category=category,
                subcategory=subcategory,
                author=author,
                description=description,
                rating=rating,
                rating_count=rating_count,
                downloads=downloads,
                url=url,
                platforms=["linux", "mac", "windows"],
                last_updated=last_updated,
                scraped_at=datetime.now(timezone.utc).isoformat(),
            ))

        return tools

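For reference, the selectors in `_parse_tool_cards` assume listing markup shaped roughly like this (a hypothetical card reconstructed from the selectors themselves, not captured from nukepedia.com; the live markup may differ):

```html
<a class="tool-card" href="/tools/gizmos/keyer/edgekeyer"
   data-name="EdgeKeyer" data-author="Jane Doe"
   data-downloads="512" data-rating="4.5" data-date="2024-01-15">
  <p class="description">Edge-refining keyer for hair and motion blur.</p>
  <span class="rating-count">12 votes</span>
</a>
```

If Nukepedia changes this structure, `a.tool-card[data-name]` will match nothing and the scrape will silently return empty categories.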
    def _scrape_detail_pages(self):
        tools_needing_details = [(i, t) for i, t in enumerate(self.tools) if not t.description]

        if not tools_needing_details:
            return

        print(f"\n Fetching {len(tools_needing_details)} detail pages...")

        for idx, (i, tool) in enumerate(tools_needing_details):
            if tool.url in self.completed_urls:
                continue

            html = self._fetch(tool.url)
            if html:
                self._parse_detail_page(html, self.tools[i])
            self.completed_urls.add(tool.url)

            if (idx + 1) % 50 == 0:
                self._save_progress()
                print(f"  {idx + 1}/{len(tools_needing_details)}")

            time.sleep(REQUEST_DELAY)

    def _parse_detail_page(self, html: str, tool: Tool):
        soup = BeautifulSoup(html, "html.parser")

        # description
        desc_el = soup.select_one(".tool-description, .description, #description, .content p")
        if desc_el:
            tool.description = " ".join(desc_el.get_text().split())[:500]

        # nuke versions
        ver_el = soup.select_one(".nuke-version, .compatibility, [class*='version']")
        if ver_el:
            tool.nuke_versions = ver_el.get_text(strip=True)

        # license
        lic_el = soup.select_one(".license, [class*='license']")
        if lic_el:
            tool.license = lic_el.get_text(strip=True)

        tool.scraped_at = datetime.now(timezone.utc).isoformat()

def main():
    parser = argparse.ArgumentParser(description="Scrape Nukepedia for VFX tools")
    parser.add_argument("--full", action="store_true", help="Fetch detail pages")
    parser.add_argument("--resume", action="store_true", help="Resume interrupted scrape")
    parser.add_argument("--output", default=None, help="Output path")
    args = parser.parse_args()

    output = Path(args.output) if args.output else Path(__file__).parent.parent / "data" / "nukepedia-catalog.json"
    output.parent.mkdir(parents=True, exist_ok=True)

    scraper = Scraper(full_scrape=args.full, resume=args.resume)
    catalog = scraper.scrape()

    with open(output, "w") as f:
        json.dump(asdict(catalog), f, indent=2)

    print(f" Saved to {output}")


if __name__ == "__main__":
    main()
171  skills/nukepedia-tools/scripts/search.py  (new file)

@@ -0,0 +1,171 @@
#!/usr/bin/env python3
"""
Search the Nukepedia tool catalog.

Usage:
    python search.py <query> [options]

Examples:
    python search.py keyer
    python search.py edge --category gizmos
    python search.py "" --category python --min-rating 4
    python search.py tracker --verbose

Support Nukepedia: https://nukepedia.com/donate
"""

import argparse
import json
import math
import sys
from pathlib import Path
from typing import Optional


def load_catalog(path: Path) -> dict:
    with open(path) as f:
        return json.load(f)

def search(catalog: dict, query: str = "", category: Optional[str] = None,
           subcategory: Optional[str] = None, author: Optional[str] = None,
           min_rating: Optional[float] = None, min_downloads: Optional[int] = None,
           limit: int = 10) -> list:
    """Search tools matching the given criteria, best matches first."""
    results = []
    query_lower = query.lower() if query else ""

    for tool in catalog.get("tools", []):
        # text search
        if query_lower:
            searchable = f"{tool['name']} {tool['description']} {tool['author']} {tool['subcategory']}".lower()
            if query_lower not in searchable:
                continue

        # category filter
        if category and tool["category"].lower() != category.lower():
            continue

        # subcategory filter
        if subcategory and subcategory.lower() not in tool["subcategory"].lower():
            continue

        # author filter
        if author and author.lower() not in tool["author"].lower():
            continue

        # rating filter
        if min_rating is not None:
            if tool["rating"] is None or tool["rating"] < min_rating:
                continue

        # downloads filter
        if min_downloads is not None:
            if tool["downloads"] < min_downloads:
                continue

        results.append(tool)

    # sort by relevance: exact/prefix/substring name matches first,
    # then boost by rating and log-scaled download count
    def score(t):
        s = 0.0
        if query_lower:
            if t["name"].lower() == query_lower:
                s += 100
            elif t["name"].lower().startswith(query_lower):
                s += 50
            elif query_lower in t["name"].lower():
                s += 25
            if query_lower in t["description"].lower():
                s += 10
        if t["rating"]:
            s += t["rating"] * 5
        if t["downloads"] > 0:
            s += math.log10(t["downloads"]) * 2
        return s

    results.sort(key=score, reverse=True)
    return results[:limit] if limit else results

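The weights in `score` mean an exact name match dominates rating and popularity. A toy check (standalone copy of the scoring logic, with made-up entries): the exact match scores 100 + 3.0×5 + log10(100)×2 = 119, while a higher-rated, far more downloaded substring match only reaches 25 + 10 + 5.0×5 + log10(10000)×2 = 68.

```python
import math

def score(t, query_lower):
    # same weights as in search(): name-match tier, description hit, rating, downloads
    s = 0.0
    if query_lower:
        if t["name"].lower() == query_lower:
            s += 100
        elif t["name"].lower().startswith(query_lower):
            s += 50
        elif query_lower in t["name"].lower():
            s += 25
        if query_lower in t["description"].lower():
            s += 10
    if t["rating"]:
        s += t["rating"] * 5
    if t["downloads"] > 0:
        s += math.log10(t["downloads"]) * 2
    return s

exact = {"name": "Keyer", "description": "", "rating": 3.0, "downloads": 100}
partial = {"name": "SuperKeyer", "description": "a keyer", "rating": 5.0, "downloads": 10000}
print(score(exact, "keyer"))    # 119.0
print(score(partial, "keyer"))  # 68.0
```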
def format_brief(tool: dict) -> str:
    rating = f" ({tool['rating']:.1f}/5)" if tool["rating"] else ""
    desc = tool["description"][:80] + "..." if len(tool["description"]) > 80 else tool["description"]
    return f" {tool['name']} - {tool['category'].title()}{rating}\n {desc}\n {tool['url']}"


def format_verbose(tool: dict) -> str:
    lines = [
        f"\n {tool['name']}",
        f" {'=' * len(tool['name'])}",
        f" Category: {tool['category'].title()} / {tool['subcategory']}",
        f" Author: {tool['author'] or 'Unknown'}",
    ]
    if tool["rating"]:
        lines.append(f" Rating: {tool['rating']:.1f}/5 ({tool['rating_count']} votes)")
    lines.append(f" Downloads: {tool['downloads']}")
    if tool["nuke_versions"]:
        lines.append(f" Nuke versions: {tool['nuke_versions']}")
    if tool["platforms"]:
        lines.append(f" Platforms: {', '.join(p.title() for p in tool['platforms'])}")
    if tool["license"]:
        lines.append(f" License: {tool['license']}")
    lines.append(f" URL: {tool['url']}")
    if tool["description"]:
        lines.append(f"\n {tool['description']}")
    return "\n".join(lines)

def main():
    parser = argparse.ArgumentParser(description="Search Nukepedia tool catalog")
    parser.add_argument("query", nargs="?", default="", help="Search query")
    parser.add_argument("--category", "-c", help="Filter by category")
    parser.add_argument("--subcategory", "-s", help="Filter by subcategory")
    parser.add_argument("--author", "-a", help="Filter by author")
    parser.add_argument("--min-rating", type=float, help="Minimum rating (1-5)")
    parser.add_argument("--min-downloads", type=int, help="Minimum downloads")
    parser.add_argument("--limit", "-l", type=int, default=10, help="Max results")
    parser.add_argument("--verbose", "-v", action="store_true", help="Verbose output")
    parser.add_argument("--catalog", default=None, help="Path to catalog JSON")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    catalog_path = Path(args.catalog) if args.catalog else Path(__file__).parent.parent / "data" / "nukepedia-catalog.json"

    if not catalog_path.exists():
        print(f"Catalog not found: {catalog_path}")
        print("Run scrape.py first to build the catalog.")
        sys.exit(1)

    catalog = load_catalog(catalog_path)
    results = search(
        catalog,
        query=args.query,
        category=args.category,
        subcategory=args.subcategory,
        author=args.author,
        min_rating=args.min_rating,
        min_downloads=args.min_downloads,
        limit=args.limit,
    )

    if args.json:
        print(json.dumps(results, indent=2))
        return

    if not results:
        print("No tools found matching your query.")
        return

    for tool in results:
        if args.verbose:
            print(format_verbose(tool))
        else:
            print(format_brief(tool))
            print()

    print(f"{len(results)} tool(s) found. Support Nukepedia: https://nukepedia.com/donate")


if __name__ == "__main__":
    main()