# Beeble Marketing Claims Archive
Exact quotes from Beeble's public-facing pages, archived January 2026.
These are provided for reference so readers can compare marketing
language against the technical findings documented in this repository.

All quotes are reproduced verbatim. Emphasis (bold) is preserved as
it appears on the original pages.
## beeble.ai/beeble-studio
Page title: "Production-Grade 4K AI Relighting, Fully On-Device"

Main heading: "4K Relighting Fully on Your Machine"

Subheading: "DESKTOP APP FOR LOCAL PROCESSING"

Feature description (Local AI Model section):

> Run our most advanced **Video-to-PBR model, SwitchLight 3**, directly
> on your desktop. Perfect for professionals who need local workflow.

Video-to-Assets feature card:

> **PBR, Alpha & Depth Map Generation**
> Powered by **SwitchLight 3.0**, convert images and videos into
> **full PBR passes with alpha and depth maps** for seamless
> relighting, background removal, and advanced compositing.

FAQ on the same page ("What is Beeble Studio?"):

> **Beeble Studio** is our desktop application that runs entirely on
> your local hardware. It provides:
> - Local GPU Processing: Process up to 4K and 1 hour using your
>   NVIDIA GPU
> - Unlimited Rendering: No credit consumption for Video-to-VFX
> - Complete Privacy: Your files never leave your machine
>
> Same AI Models: Access to SwitchLight 3.0 and all core features

Source: https://beeble.ai/beeble-studio
## beeble.ai/research/switchlight-3-0-is-here
Published: November 5, 2025

Headline: "Introducing the best Video-to-PBR model in the world"

Opening paragraph:

> SwitchLight 3.0 is the best Video-to-PBR model in the world. It
> delivers unmatched quality in generating physically based rendering
> (PBR) passes from any video, setting a new standard for VFX
> professionals and filmmakers who demand true production-level
> relighting.

On the "true video model" claim:

> **True Video Model:** For the first time, SwitchLight is a **true
> video model** that processes multiple frames simultaneously. Earlier
> versions relied on single-frame image processing followed by a
> separate deflicker step. In version 3.0, temporal consistency is
> built directly into the model, delivering smoother, more stable
> results while preserving sharp detail.

On training data:

> **10x Larger Training Set:** Trained on a dataset ten times bigger,
> SwitchLight 3.0 captures a broader range of lighting conditions,
> materials, and environments for more realistic results.

Detail quality claim:

> SwitchLight 3.0 achieves a new level of detail and visual clarity.
> Compared to 2.0, it captures **finer facial definition, intricate
> fabric wrinkles, and sophisticated surface patterns** with far
> greater accuracy.

Motion handling:

> SwitchLight 3.0 is a **true end-to-end video model** that
> understands motion natively. Unlike SwitchLight 2.0, which relied
> on an image model and post-smoothing, it eliminates **flicker and
> ghosting** even in extreme motion, shaking cameras, or vibrating
> subjects.

Source: https://beeble.ai/research/switchlight-3-0-is-here
## docs.beeble.ai/help/faq
"Is Beeble's AI trained responsibly?":

> **Yes.** Beeble's proprietary models--such as **SwitchLight**--are
> trained only on ethically sourced, agreement-based datasets. We
> never use scraped or unauthorized data.
>
> When open-source models are included, we choose them
> carefully--only those with published research papers that disclose
> their training data and carry valid commercial-use licenses.

"What is Video-to-VFX?":

> **Video-to-VFX** uses our foundation model, **SwitchLight 3.0**,
> and SOTA AI models to convert your footage into VFX-ready assets
> by generating:
> - **PBR Maps:** Normal, Base color, Metallic, Roughness, Specular
>   for relighting
> - **Alpha:** foreground matte for background replacement
> - **Depth Map:** For compositing and 3D integration

Source: https://docs.beeble.ai/help/faq
## docs.beeble.ai/beeble-studio/video-to-vfx
Product documentation page:

> **Video-to-VFX** uses our foundation model, SwitchLight 3.0, to
> convert your footage into VFX-ready assets.

Key Features section:

> **PBR, Alpha & Depth Pass Generation**
> Powered by **SwitchLight 3.0**. Convert footage into full PBR
> passes with alpha and depth maps.

Source: https://docs.beeble.ai/beeble-studio/video-to-vfx
## Investor and press coverage
### Seed funding (July 2024)

Beeble raised a $4.75M seed round at a reported $25M valuation. The
round was led by Basis Set Ventures and Fika Ventures. At the time
of funding, the company had approximately 7 employees.

Press coverage from the funding round:

> Beeble [...] has raised $4.75 million in seed funding to develop
> its **foundational model** for AI-powered relighting in video.

Source: TechCrunch and other outlets, July 2024

Investor quotes (public press releases):

Basis Set Ventures described SwitchLight as a "world-class
foundational model in lighting" in their investment rationale.

The term "foundational model" is significant. In the AI industry,
it implies a large-scale, general-purpose model trained from scratch
on diverse data--models like GPT-4, DALL-E, or Stable Diffusion.
The technical evidence suggests Beeble's pipeline is a collection of
open-source models (some used directly, others used as architectural
building blocks) with proprietary weights trained on domain-specific
data. Whether this constitutes a "foundational model" is a question
of characterization, but it is the characterization that was used
to secure investment.

As of January 2026, the company appears to have approximately 9
employees.
## Version history
| Version | Approximate date | Key changes |
|---------|-----------------|-------------|
| SwitchLight (mobile app) | 2022-2023 | Photo relighting app for iOS, 3M+ downloads claimed. Selfie/portrait focus. |
| SwitchLight 1.0 | Late 2023 - early 2024 | First VFX tool. Required alpha mask input. Isolated humans only. Architecture described in CVPR 2024 paper. Per-frame processing. |
| SwitchLight 2.0 | June 30, 2025 | "Complete architecture rebuild." No alpha mask required. Full-scene PBR maps (not just isolated subjects). Claimed 10x larger model, 13x more training data than 1.0. Still per-frame with post-processing deflicker. 2K resolution limit (cloud). 8-bit PNG output. User data training controversy (see below). |
| SwitchLight 3.0 | November 5, 2025 | Marketed as "true video model" with multi-frame processing. Claimed 10x more training data than 2.0 (130x more than 1.0). 4K resolution support. 16-bit EXR output. Desktop app (Beeble Studio) launched for local GPU processing. Paid users exempt from data training. |

Source: beeble.ai/research/switchlight-2-0-is-here,
beeble.ai/research/switchlight-3-0-is-here
## Team and leadership
Beeble was founded in 2022 in Seoul, South Korea by five co-founders
who previously worked on the AI research and machine learning team of
Krafton Inc., a South Korean game publisher.

**CEO: Hoon Kim**
- B.S. and M.S. in Electrical Engineering from KAIST (2012-2019)
- Research scientist at Lunit (medical AI, 2019-2020)
- Deep learning research scientist at Krafton Inc. (voice synthesis
  team leader, 2020-2022)
- Six peer-reviewed papers at ICLR, AAAI, and an ICML workshop
- Prior research was in autonomous driving (sim-to-real transfer,
  vehicle collision prediction) and voice synthesis--not in computer
  vision, relighting, or PBR decomposition
- The SwitchLight paper is his first publication in relighting

Source: gnsrla12.github.io/About-myself/

**Paper co-author: Sanghyun Woo** (last/senior author on the CVPR paper)
- Ph.D. from KAIST; currently Senior Research Scientist at Google
  DeepMind
- Previously Faculty Fellow at NYU Courant (hosted by Saining Xie)
- Creator of CBAM (Convolutional Block Attention Module, ECCV 2018),
  with 34,000+ citations
- Co-author on ConvNeXt V2 (CVPR 2023) and Cambrian-1 (NeurIPS 2024 Oral)
- Listed as affiliated with NYU on the SwitchLight paper, not Beeble
- Whether his involvement extends beyond the CVPR 2024 paper is
  unknown

Source: sites.google.com/view/sanghyunwoo/

**Team size:** 9 employees as of early 2026. With 5 co-founders, this
implies approximately 4 non-founder employees.
## Products and pricing
**Beeble Cloud** (web app): Credit-based processing. Free tier
(15-second clips), Creator $19/month, Professional $75/month.

**Beeble Studio** (desktop app): Local GPU processing.
Indie $504/year ($42/month, for studios under $200K revenue),
Standard $3,000/year ($250/month).

**SwitchLight API**: Available at switchlight-api.beeble.ai for
developer integration.

**Mobile app**: SwitchLight photo editor on iOS, 3M+ downloads
claimed. This is a consumer selfie relighting app, not a professional
VFX tool.

**Plugins**: Nuke, Blender, Unreal Engine integration.

The original SwitchLight Studio product was shut down and merged into
Beeble Studio. The switchlight-studio.beeble.ai domain displays a
"Closing" notice.

Source: beeble.ai/pricing, beeble.ai/pricing-cloud
## User data training controversy
When SwitchLight 2.0 launched in mid-2025, CG Channel reported that
Beeble's terms of use allowed user-uploaded content to be used for AI
training. This caused significant backlash in the VFX community, where
studios are protective of proprietary footage.

Beeble responded by changing policy: paid subscribers' content is no
longer used for training (as of the SwitchLight 3.0 launch in November
2025). Free-tier uploads may still be used. The Beeble Studio desktop
app was positioned as the privacy-focused alternative, with all
processing running locally.

Source: cgchannel.com/2025/11/beeble-launches-switchlight-3-0/
## Interview statements
In a September 2025 interview with Digital Production magazine
("We have to talk about Switchlight 2.0"), CEO Hoon Kim stated:

> At its core, it's a neural net doing the heavy lifting.

When asked for architecture details, he said he "couldn't share more
details beyond that."

On training data, Kim stated all data was "created in-house using
scans of real humans and objects" with "no movies, films, or
third-party content used."

Source: digitalproduction.com/2025/09/05/we-have-to-talk-about-switchlight-2-0/
## Production credits
Boxel Studio used SwitchLight for VFX relighting sequences on
*Superman & Lois*. This appears to be Beeble's most prominent
production credit.

Source: boxelstudio.com/beeble-switchlight/
## Patent filings
No patent applications or grants were found for Beeble Inc. or any of
its founders related to SwitchLight, relighting, or inverse rendering.
Searches were conducted on USPTO Patent Public Search and Google
Patents for "Beeble," "Hoon Kim," "SwitchLight," and "portrait
relighting neural network."

Note: Patent applications have an 18-month publication delay from
filing, so recent applications may not yet be visible.

Searched: January 2026
## CVPR paper reception
The SwitchLight paper was accepted as a CVPR 2024 highlight paper, a
designation given to roughly the top 10% of accepted papers (CVPR
2024 received 11,532 submissions). Beeble claimed perfect 5/5/5
reviewer scores.

The paper is cited as "state-of-the-art" in IC-Light's ICLR 2025
paper and is referenced in the Awesome-Relighting curated list.
No public criticisms of the paper were found, though CVPR reviews
are confidential.

The paper has no associated code release. The
beeble-ai/SwitchLight-Studio GitHub repository contains only desktop
application scripts and integration helpers, not model code. For a
CVPR highlight paper, the absence of a code release is notable.

The arXiv version is licensed CC BY-NC-SA 4.0 (non-commercial).

Source: x.com/beeble_ai/status/1763564054529159548
## Community presence
As of January 2026, there is minimal Reddit discussion of Beeble or
SwitchLight on r/vfx, r/compositing, or r/NukeVFX. For a tool that
has been used on a major television production (Superman & Lois) and
claims 3M mobile app downloads, the lack of organic community
discussion is notable.

There is a separate, unrelated company called "Beeble" (based in
Latvia, offering encrypted email and cloud storage) that has an
AppSumo listing with poor reviews. It is not the same company as
Beeble AI, but it creates brand confusion.
## Notable patterns
Beeble's marketing consistently attributes the entire Video-to-VFX
pipeline to SwitchLight 3.0. The Beeble Studio page states that PBR,
alpha, and depth map generation are all "Powered by SwitchLight 3.0."

The FAQ is the only place where Beeble acknowledges the use of
open-source models, stating they "choose them carefully" and select
those with "valid commercial-use licenses." However, the FAQ does not
name any of the specific open-source models used, nor does it clarify
which pipeline stages use open-source components versus SwitchLight.

The overall marketing impression is that SwitchLight is responsible
for all output passes. The technical reality, as documented in this
repository's analysis, is that background removal (alpha) and depth
estimation are produced by open-source models used off the shelf,
and that the PBR decomposition models appear to be architecturally
built from open-source frameworks (segmentation_models_pytorch, timm
backbones) with proprietary trained weights. See
[docs/REPORT.md](../docs/REPORT.md) for the full analysis.