beeble-forensic-analysis/evidence/marketing_claims.md
2026-01-26 11:57:40 -07:00

Beeble Marketing Claims Archive

Exact quotes from Beeble's public-facing pages, archived January 2026. These are provided for reference so readers can compare marketing language against the technical findings documented in this repository.

All quotes are reproduced verbatim. Emphasis (bold) is preserved as it appears on the original pages.

beeble.ai/beeble-studio

Page title: "Production-Grade 4K AI Relighting, Fully On-Device"

Main heading: "4K Relighting Fully on Your Machine"

Subheading: "DESKTOP APP FOR LOCAL PROCESSING"

Feature description (Local AI Model section):

Run our most advanced Video-to-PBR model, SwitchLight 3, directly on your desktop. Perfect for professionals who need local workflow.

Video-to-Assets feature card:

PBR, Alpha & Depth Map Generation Powered by SwitchLight 3.0, convert images and videos into full PBR passes with alpha and depth maps for seamless relighting, background removal, and advanced compositing.

FAQ on the same page ("What is Beeble Studio?"):

Beeble Studio is our desktop application that runs entirely on your local hardware. It provides:

  • Local GPU Processing: Process up to 4K and 1 hour using your NVIDIA GPU
  • Unlimited Rendering: No credit consumption for Video-to-VFX
  • Complete Privacy: Your files never leave your machine
  • Same AI Models: Access to SwitchLight 3.0 and all core features

Source: https://beeble.ai/beeble-studio

beeble.ai/research/switchlight-3-0-is-here

Published: November 5, 2025

Headline: "Introducing the best Video-to-PBR model in the world"

Opening paragraph:

SwitchLight 3.0 is the best Video-to-PBR model in the world. It delivers unmatched quality in generating physically based rendering (PBR) passes from any video, setting a new standard for VFX professionals and filmmakers who demand true production-level relighting.

On the "true video model" claim:

True Video Model: For the first time, SwitchLight is a true video model that processes multiple frames simultaneously. Earlier versions relied on single-frame image processing followed by a separate deflicker step. In version 3.0, temporal consistency is built directly into the model, delivering smoother, more stable results while preserving sharp detail.

On training data:

10x Larger Training Set: Trained on a dataset ten times bigger, SwitchLight 3.0 captures a broader range of lighting conditions, materials, and environments for more realistic results.

Detail quality claim:

SwitchLight 3.0 achieves a new level of detail and visual clarity. Compared to 2.0, it captures finer facial definition, intricate fabric wrinkles, and sophisticated surface patterns with far greater accuracy.

Motion handling:

SwitchLight 3.0 is a true end-to-end video model that understands motion natively. Unlike SwitchLight 2.0, which relied on an image model and post-smoothing, it eliminates flicker and ghosting even in extreme motion, shaking cameras, or vibrating subjects.

Source: https://beeble.ai/research/switchlight-3-0-is-here
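The architectural distinction these quotes describe, an image model run per frame with a separate deflicker pass versus a model that ingests a temporal window of frames, can be sketched in a few lines. The sketch below is purely illustrative: the function names, the moving-average deflicker, and the window sizes are assumptions, not Beeble's actual implementation.

```python
import numpy as np

def per_frame_pipeline(frames, image_model, window=3):
    """SwitchLight 2.0-style (as described): run an image model on each
    frame independently, then smooth temporally in a separate pass."""
    passes = np.stack([image_model(f) for f in frames])  # no temporal context
    # Separate deflicker step: modeled here as a simple moving average
    # (an assumption; the actual deflicker method is not disclosed).
    kernel = np.ones(window) / window
    flat = passes.reshape(len(frames), -1)
    smoothed = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, flat)
    return smoothed.reshape(passes.shape)

def temporal_pipeline(frames, video_model, window=5):
    """SwitchLight 3.0-style (as described): the model sees a window of
    frames at once, so temporal consistency is learned, not bolted on."""
    out = []
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        clip = np.stack(frames[lo:hi])         # multiple frames simultaneously
        out.append(video_model(clip, i - lo))  # predict the center frame
    return np.stack(out)
```

The practical difference is where temporal information enters: in the first sketch only the post-hoc smoothing sees neighboring frames, while in the second the model itself does.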

docs.beeble.ai/help/faq

"Is Beeble's AI trained responsibly?":

Yes. Beeble's proprietary models--such as SwitchLight--are trained only on ethically sourced, agreement-based datasets. We never use scraped or unauthorized data.

When open-source models are included, we choose them carefully--only those with published research papers that disclose their training data and carry valid commercial-use licenses.

"What is Video-to-VFX?":

Video-to-VFX uses our foundation model, SwitchLight 3.0, and SOTA AI models to convert your footage into VFX-ready assets by generating:

  • PBR Maps: Normal, Base color, Metallic, Roughness, Specular for relighting
  • Alpha: foreground matte for background replacement
  • Depth Map: For compositing and 3D integration

Source: https://docs.beeble.ai/help/faq
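For concreteness, the asset set the FAQ describes maps onto a per-frame bundle of named passes. The container below is illustrative only: the pass names come from the FAQ, but the `VFXPasses` class, the channel layouts, and the helper are assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VFXPasses:
    """One frame's worth of the passes the FAQ lists (names from the FAQ;
    shapes and channel layout are assumptions for illustration)."""
    normal: np.ndarray      # (H, W, 3) surface normals, for relighting
    base_color: np.ndarray  # (H, W, 3) albedo
    metallic: np.ndarray    # (H, W)    PBR metallic
    roughness: np.ndarray   # (H, W)    PBR roughness
    specular: np.ndarray    # (H, W)    PBR specular
    alpha: np.ndarray       # (H, W)    foreground matte
    depth: np.ndarray       # (H, W)    depth, for compositing/3D integration

def empty_passes(h, w):
    """Allocate a zeroed pass bundle for an h-by-w frame."""
    return VFXPasses(
        normal=np.zeros((h, w, 3)), base_color=np.zeros((h, w, 3)),
        metallic=np.zeros((h, w)), roughness=np.zeros((h, w)),
        specular=np.zeros((h, w)), alpha=np.zeros((h, w)),
        depth=np.zeros((h, w)))
```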

docs.beeble.ai/beeble-studio/video-to-vfx

Product documentation page:

Video-to-VFX uses our foundation model, SwitchLight 3.0, to convert your footage into VFX-ready assets.

Key Features section:

PBR, Alpha & Depth Pass Generation Powered by SwitchLight 3.0. Convert footage into full PBR passes with alpha and depth maps.

Source: https://docs.beeble.ai/beeble-studio/video-to-vfx

Investor and press coverage

Seed funding (July 2024)

Beeble raised a $4.75M seed round at a reported $25M valuation. The round was led by Basis Set Ventures and Fika Ventures. At the time of funding, the company had approximately 7 employees.

Press coverage from the funding round:

Beeble [...] has raised $4.75 million in seed funding to develop its foundational model for AI-powered relighting in video.

Source: TechCrunch and other outlets, July 2024

Investor quotes (public press releases):

Basis Set Ventures described SwitchLight as a "world-class foundational model in lighting" in their investment rationale.

The term "foundational model" is significant. In the AI industry, it implies a large-scale, general-purpose model trained from scratch on diverse data--models like GPT-4, DALL-E, or Stable Diffusion. The technical evidence suggests Beeble's pipeline is a collection of open-source models (some used directly, others used as architectural building blocks) with proprietary weights trained on domain-specific data. Whether this constitutes a "foundational model" is a characterization question, but it is a characterization that was used to secure investment.

As of January 2026, the company appears to have approximately 9 employees.

Notable patterns

Beeble's marketing consistently attributes the entire Video-to-VFX pipeline to SwitchLight 3.0. The Beeble Studio page states that PBR, alpha, and depth map generation are all "Powered by SwitchLight 3.0."

The FAQ is the only place where Beeble acknowledges the use of open-source models, stating they "choose them carefully" and select those with "valid commercial-use licenses." However, the FAQ does not name any of the specific open-source models used, nor does it clarify which pipeline stages use open-source components versus SwitchLight.

The overall marketing impression is that SwitchLight is responsible for all output passes. The technical reality, as documented in this repository's analysis, is that background removal (alpha) and depth estimation are produced by open-source models used off the shelf, and the PBR decomposition models appear to be architecturally built from open-source frameworks (segmentation_models_pytorch, timm backbones) with proprietary trained weights. See docs/REPORT.md for the full analysis.