docs: add competitive landscape and deep dive findings
Version evolution (SL 1.0→2.0→3.0), team background, no patents, NVIDIA DiffusionRenderer as open-source competitor, dataset landscape (POLAR, SynthLight, etc.), botocore/AWS SDK in privacy app, MetaHuman EULA fix, user data controversy, and DiffusionRenderer ComfyUI integration across all docs.
This commit is contained in:
parent
7f5815a2b4
commit
86accadc28
21
README.md
@@ -117,10 +117,31 @@ documented here are narrower:

   not appear to match the deployed application
4. Missing license attribution required by MIT and Apache 2.0
5. The CVPR 2024 paper describes SwitchLight 1.0; the shipped product
   is SwitchLight 3.0, which went through at least two "complete
   architecture rebuilds." The physics-based architecture (Cook-
   Torrance, Normal Net, Specular Net) described in the paper may not
   reflect the deployed product.

The first and fourth are correctable. The second is a question for
investors. The third is a question for the research community.

### Competitive landscape

The competitive moat described in Beeble's investor materials is
eroding rapidly. NVIDIA's DiffusionRenderer (CVPR 2025 Oral, open
source) performs video-to-PBR decomposition and relighting using video
diffusion models. Multiple research groups have demonstrated that
synthetic training data (Blender renders of 3D characters with known
PBR properties) produces results comparable to lightstage-trained
methods, without requiring proprietary lightstage captures.

No patent applications were found for Beeble or its founders related
to SwitchLight, relighting, or inverse rendering. The CVPR 2024 paper
has no associated code release.

## License

This repository is licensed under
@@ -223,21 +223,67 @@ roughness, metallic. Physical lightstage captures are one way to

obtain this data, but modern synthetic rendering provides the same
thing more cheaply and at greater scale:

- **Blender character generators** (Human Generator, MB-Lab, MPFB2):
  produce characters with known material properties that can be
  rendered procedurally. Blender's Cycles renderer outputs physically
  accurate PBR passes natively. Fully open source, no licensing
  restrictions for AI training.
- **Houdini procedural pipelines**: can generate hundreds of
  thousands of unique character/lighting/pose combinations
  programmatically.
- ~~**Unreal Engine MetaHumans**~~: photorealistic digital humans
  with full PBR material definitions. However, **the MetaHuman EULA
  explicitly prohibits using MetaHumans as AI training data**: "You
  must ensure that your activities with the Licensed Technology do
  not result in using the Licensed Technology as a training input or
  prompt-based input into any Generative AI Program." MetaHumans can
  be used within AI-enhanced workflows but not to train AI models.

The ground truth is inherent in synthetic rendering: you created the
scene, so you already have the PBR maps. A VFX studio with a
standard character pipeline could generate a training dataset in a
week.
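The pairing step this implies is mechanical: every beauty render already has its ground-truth passes sitting next to it on disk. A minimal indexing sketch, where the file layout and naming scheme are illustrative assumptions rather than any dataset's actual format:

```python
# Pair beauty renders with their ground-truth PBR passes by frame ID.
# Assumed (illustrative) layout: renders/0001_beauty.png,
# renders/0001_albedo.png, renders/0001_normal.png, ...
from pathlib import Path

PASSES = ("albedo", "normal", "roughness", "metallic")

def collect_pairs(root: Path) -> list[dict[str, Path]]:
    """Return one training sample per frame that has all ground-truth passes."""
    samples = []
    for beauty in sorted(root.glob("*_beauty.png")):
        frame = beauty.name.rsplit("_", 1)[0]          # "0001_beauty.png" -> "0001"
        gt = {p: root / f"{frame}_{p}.png" for p in PASSES}
        if all(path.exists() for path in gt.values()):  # skip incomplete frames
            samples.append({"input": beauty, **gt})
    return samples
```

Frames missing any pass are silently skipped, which is the behavior you want when a render farm drops individual AOV outputs.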
### Existing datasets and published results

The lightstage data advantage that the CVPR paper frames as a
competitive moat was real in 2023-2024. It is no longer.

**Public OLAT datasets now rival Beeble's scale:**

- **POLAR** (Dec 2025, public) -- 220 subjects, 156 light directions,
  32 views, 4K, 28.8 million images total. Beeble's CVPR paper reports
  287 subjects; POLAR is at 77% of that count, freely available.
  https://rex0191.github.io/POLAR/
- **HumanOLAT** (ICCV 2025, public gated) -- 21 subjects, full body,
  40 cameras at 6K, 331 LEDs. The first public full-body OLAT dataset.
  https://vcai.mpi-inf.mpg.de/projects/HumanOLAT/

**Synthetic approaches already match lightstage quality:**

- **SynthLight** (Adobe/Yale, CVPR 2025) -- trained purely on ~350
  synthetic 3D heads rendered in Blender with PBR materials. Achieves
  results comparable to lightstage-trained methods on lightstage test
  data. No lightstage data used at all.
  https://vrroom.github.io/synthlight/
- **NVIDIA Lumos** (SIGGRAPH Asia 2022) -- rendered 300k synthetic
  samples in a virtual lightstage. Matched state-of-the-art
  lightstage methods three years ago.
- **OpenHumanBRDF** (July 2025) -- 147 human models with full PBR
  decomposition including SSS, built in Blender. Exactly the kind
  of dataset needed for training PBR decomposition models.
  https://arxiv.org/abs/2507.18385

**Cost to replicate:** Generating a competitive synthetic dataset
costs approximately $4,500-$18,000 total (Blender + MPFB2 for
character generation, Cycles for rendering, cloud GPUs for compute).
Raw GPU compute for 100k PBR renders is approximately $55 on an A100.
CHORD (Ubisoft) trained its PBR decomposition model in 5.2 days on
a single H100, costing approximately $260-500 in compute.
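These estimates are easy to sanity-check with back-of-envelope arithmetic. A sketch, where the hourly GPU rates and per-frame render time are assumptions (typical cloud spot pricing, not figures from this analysis):

```python
# Back-of-envelope check of the compute costs quoted above.
# Assumed cloud rates (not from this analysis):
A100_PER_HOUR = 1.80   # USD/hour, assumed spot-market rate
H100_PER_HOUR = 2.50   # USD/hour, assumed spot-market rate

# 100k PBR renders, assuming ~1 second per frame on an A100:
render_hours = 100_000 * 1.0 / 3600          # ~27.8 GPU-hours
render_cost = render_hours * A100_PER_HOUR   # ~$50, same ballpark as the ~$55 figure

# CHORD training: 5.2 days on a single H100:
train_hours = 5.2 * 24                       # 124.8 GPU-hours
train_cost = train_hours * H100_PER_HOUR     # ~$312, within the quoted $260-500 range

print(f"renders: ${render_cost:.0f}  training: ${train_cost:.0f}")
```

The conclusion is insensitive to the exact rates: even at several times these prices, both line items stay in the hundreds of dollars.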
With model sizes under 2 GB (based on the encrypted model files in
Beeble's distribution) and standard encoder-decoder architectures,
the compute cost to train equivalent models from synthetic data is
@@ -246,7 +292,8 @@ modest--well within reach of independent researchers or small studios.

This does not mean Beeble's trained weights are worthless. But the
barrier to replication is lower than the marketing suggests,
especially given that the model architectures are standard
open-source frameworks and equivalent training data is now publicly
available.

## 5. Relighting
@@ -256,13 +303,49 @@ relighting. This is the least well-characterized stage in our

analysis--the relighting model's architecture could not be determined
from the available evidence.

### NVIDIA DiffusionRenderer (replaces both PBR decomposition AND relighting)

This is the most significant recent development. NVIDIA's
DiffusionRenderer does the same thing as Beeble's entire core
pipeline--video to PBR passes plus relighting--in a single open-source
system.

- **DiffusionRenderer** (NVIDIA, CVPR 2025 Oral) -- a general-purpose
  method for both neural inverse and forward rendering. Two modes:
  - **Inverse**: input image/video → geometry and material buffers
    (albedo, normals, roughness, metallic)
  - **Forward**: G-buffers + environment map → photorealistic relit
    output

The upgraded **Cosmos DiffusionRenderer** (June 2025) brings
improved quality powered by NVIDIA Cosmos video foundation models.

GitHub: https://github.com/nv-tlabs/cosmos-transfer1-diffusion-renderer
Academic version: https://github.com/nv-tlabs/diffusion-renderer
Weights: https://huggingface.co/collections/zianw/cosmos-diffusionrenderer-6849f2a4da267e55409b8125
**License: Apache 2.0 (code), NVIDIA Open Model License (weights)**

Hardware: approximately 16 GB VRAM recommended.

**ComfyUI integration**: A community wrapper exists at
https://github.com/eggsbenedicto/DiffusionRenderer-ComfyUI
(experimental, Linux tested). Requires downloading the Cosmos
DiffusionRenderer checkpoints and NVIDIA Video Tokenizer
(Cosmos-1.0-Tokenizer-CV8x8x8).

This is a direct, open-source replacement for Beeble's core value
proposition, backed by NVIDIA's resources and presented as an Oral
at CVPR 2025.

### IC-Light (image relighting)

- **IC-Light** (ICLR 2025, by lllyasviel / ControlNet creator) --
  the leading open-source image relighting model. Two modes:
  text-conditioned (describe the target lighting) and
  background-conditioned (provide a background image whose lighting
  should be matched). Based on Stable Diffusion. V2 available with a
  16-channel VAE.
  GitHub: https://github.com/lllyasviel/IC-Light

IC-Light uses diffusion-based lighting transfer rather than
@@ -330,7 +413,8 @@ model = timm.create_model('vit_large_patch14_dinov2.lvd142m',

| Metallic | SMP + timm backbone (proprietary weights) | CHORD / RGB-X | Weaker for portraits |
| Specular | SMP + timm backbone (proprietary weights) | CHORD / RGB-X | Weaker for portraits |
| Super resolution | RRDB-Net (open source) | ESRGAN / Real-ESRGAN | Identical (same model) |
| Relighting | Proprietary (not fully characterized) | DiffusionRenderer / IC-Light / manual | Comparable (DiffusionRenderer) |
| Full inverse+forward rendering | Entire pipeline | DiffusionRenderer (NVIDIA, CVPR 2025) | Direct open-source competitor |

The "Beeble model" column reflects what was found in the application
binary, not what the CVPR paper describes. See
@@ -350,6 +434,15 @@ models were trained on material textures and interior scenes. However,

as discussed above, the barrier to creating equivalent training data
using synthetic rendering is lower than commonly assumed.

Where DiffusionRenderer changes the picture: NVIDIA's
DiffusionRenderer (CVPR 2025 Oral) handles both inverse rendering
(video → PBR maps) and forward rendering (PBR maps + lighting →
relit output) in a single open-source system. This is the first
open-source tool that directly replicates Beeble's entire core
pipeline, including relighting. It is backed by NVIDIA's resources,
uses Apache 2.0 licensing for code, and has a ComfyUI integration
available.

Where open-source wins on flexibility: manual relighting in
Blender/Nuke with the extracted PBR passes gives full artistic control
that Beeble's automated pipeline does not offer.
@@ -369,9 +462,11 @@ need high-quality material properties, Beeble's model still has an

edge due to its portrait-specific training data. But the gap is
narrowing as models like CHORD improve.

If you use Beeble for one-click relighting, NVIDIA's
DiffusionRenderer is a direct open-source competitor that handles both
PBR decomposition and relighting in a single system. IC-Light provides
a diffusion-based alternative, and manual PBR relighting in
Blender/Nuke gives you full artistic control.

The core value proposition of Beeble Studio--beyond the models
themselves--is convenience. It packages everything into a single
@@ -235,6 +235,14 @@ into which tracker variant Beeble uses, the exact license obligation

cannot be determined. If an AGPL-3.0 tracker is used, the license
requirements would be significantly more restrictive than MIT.

The implications of an AGPL-3.0 tracker would extend far beyond
attribution. AGPL-3.0 requires making the complete source code
of the incorporating application available to users--effectively
requiring Beeble to open-source its entire application. This is among
the most restrictive open-source licenses and represents a
significantly different risk profile than the MIT/Apache non-compliance
discussed elsewhere in this document.

### DexiNed (edge detection)

- **Repository**: https://github.com/xavysp/DexiNed
@@ -113,6 +113,14 @@ website, documentation, FAQ, and research pages were reviewed to

understand how the technology is described to users. All public
claims were archived with URLs and timestamps.

The manifest confirms Python 3.11 as the runtime (via the presence of
`libpython3.11.so.1.0` in the downloaded files). TensorRT 10.12.0 was
also identified, and notably, builder resources are present alongside
the runtime--not just inference libraries. The presence of TensorRT
builder components suggests possible on-device model compilation,
meaning TensorRT engines may be compiled locally on the user's GPU
rather than shipped as pre-built binaries.
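A quick way to test this hypothesis on an unpacked distribution is to check which TensorRT shared libraries it ships. A hedged sketch: the library names follow TensorRT 10.x Linux packaging conventions, but exactly which subset any given application bundles is an assumption to verify, not a fact from this analysis:

```python
# Classify a TensorRT install as runtime-only or builder-capable by
# which shared libraries are present. Names follow TensorRT 10.x
# Linux packaging (assumption, verify against your install).
from pathlib import Path

RUNTIME_LIBS = {"libnvinfer", "libnvonnxparser"}
BUILDER_LIBS = {"libnvinfer_builder_resource"}  # shipped only where engines can be built

def classify_tensorrt(lib_dir: Path) -> str:
    # "libnvinfer.so.10.12.0" -> "libnvinfer"
    stems = {p.name.split(".so")[0] for p in lib_dir.glob("*.so*")}
    if stems & BUILDER_LIBS:
        return "builder present: engines may be compiled on-device"
    if stems & RUNTIME_LIBS:
        return "runtime only: pre-built engines expected"
    return "no TensorRT libraries found"
```

Run against the application's library directory; a hit on the builder resource library is consistent with the on-device compilation reading above, while its absence would point to pre-built engines.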
## What was not done
110
docs/REPORT.md
@@ -15,9 +15,14 @@ maps, base color, roughness, specular, and metallic passes, along

with AI-driven relighting capabilities.

Beeble markets its pipeline as being "Powered by SwitchLight 3.0,"
their proprietary video-to-PBR model. The original SwitchLight
architecture was published at CVPR 2024 as a highlight paper (top
~10% of accepted papers), but that paper describes SwitchLight 1.0.
The product has since gone through at least two major rebuilds:
SwitchLight 2.0 (June 2025, described by Beeble as a "complete
architecture rebuild") and SwitchLight 3.0 (November 2025, marketed
as a "true video model"). The application is sold as a subscription
product, with plans starting at $42/month.

This analysis was prompted by observing that several of Beeble
Studio's output passes closely resemble the outputs of well-known
@@ -423,8 +428,18 @@ standard in ML applications:

| Flet | Apache 2.0 | Cross-platform GUI framework |
| SoftHSM2 / PKCS#11 | BSD 2-Clause | License token validation |
| OpenSSL 1.1 | Apache 2.0 | Cryptographic operations |
| botocore (AWS SDK) | Apache 2.0 | Cloud connectivity (1,823 service files) |

Three entries deserve mention. **botocore** is the AWS SDK core
library, bundled with 1,823 service definition files covering 400+
AWS services. For an application whose product page states "Your
files never leave your machine," the presence of the full AWS SDK
raises questions about what network connectivity the application
maintains. This analysis did not perform network monitoring to
determine what connections, if any, the application makes during
normal operation.

**Pyarmor** (runtime ID
`pyarmor_runtime_007423`) is used to encrypt all of Beeble's custom
Python code--every proprietary module is obfuscated with randomized
names and encrypted bytecode. This prevents static analysis of how
@@ -456,6 +471,17 @@ This is presented as a unified system where intrinsic decomposition

step in the relighting pipeline. The paper's novelty claim rests
partly on this physics-driven architecture.

An important caveat: the CVPR paper describes SwitchLight 1.0.
The shipped product is SwitchLight 3.0, which Beeble says went
through two major rebuilds. SwitchLight 2.0 (June 2025) was described
as a "complete architecture rebuild" that removed the alpha mask
requirement and extended from isolated humans to full scenes.
SwitchLight 3.0 (November 2025) was described as a "true video
model" with multi-frame processing, replacing the per-frame
architecture. The paper's physics-based architecture may not reflect
what is currently deployed. The binary analysis that follows applies
to the deployed product, not the CVPR paper.

### 4.2 What the binary contains

A thorough string search of the 2GB process memory dump and the 56MB
@@ -604,7 +630,7 @@ available evidence, not as a certainty.

Beeble uses two layers of protection to obscure its pipeline:

**Model encryption.** The six model files are stored as `.enc`
files encrypted with AES. They total 4.3 GB:

| File | Size |
|------|------|
@@ -762,16 +788,58 @@ replicating each stage of the pipeline with open-source tools.

There is a common assumption that the training data represents a
significant barrier to replication--that lightstage captures are
expensive and rare, and therefore the trained weights are uniquely
valuable. As of late 2025, this assumption is increasingly difficult
to sustain.

Multiple public datasets now provide the kind of paired image +
ground-truth PBR data needed for training:

- **POLAR** (December 2025): 220 subjects, 156 light directions, 32
  views, 4K resolution, 28.8 million images. This is comparable in
  scale to the 287 subjects cited in Beeble's CVPR paper.
- **HumanOLAT** (ICCV 2025): the first public full-body lightstage
  dataset, 21 subjects, 331 OLAT lighting conditions.
- **OpenHumanBRDF** (July 2025): 147 human models with full PBR
  properties (diffuse, specular, SSS) in Blender.
- **MatSynth** (CVPR 2024): 433 GB of CC0/CC-BY PBR material maps,
  used to train Ubisoft's CHORD model.

Published results further undermine the lightstage data moat.
**SynthLight** (CVPR 2025) trained purely on ~350 synthetic Blender
heads and matched the quality of lightstage-trained methods. **NVIDIA
Lumos** (SIGGRAPH Asia 2022) matched state-of-the-art with 300,000
synthetic samples. **DiFaReli++** outperformed lightstage baselines
using only 2D internet images.

The cost estimates are modest. Ubisoft's CHORD model was trained in
5.2 days on a single H100 GPU (~$260-500 in cloud compute). A full
replication effort--synthetic dataset generation plus model
training--has been estimated at $4,500-$18,000, a fraction of
Beeble's $4.75M seed round.

Note: Unreal Engine MetaHumans, while visually excellent, cannot
legally be used for AI training. Epic's MetaHuman EULA explicitly
prohibits "using the Licensed Technology as a training input...into
any Generative AI Program." Blender with the MPFB2 plugin is a
viable alternative for synthetic data generation without license
restrictions.

The competitive landscape shifted significantly in 2025. NVIDIA's
**DiffusionRenderer** (CVPR 2025 Oral) performs both inverse
rendering (video → PBR maps) and forward rendering (PBR maps +
lighting → relit output) using video diffusion models. It is open
source (Apache 2.0 code, NVIDIA Open Model License for weights)
and has a ComfyUI integration. This is the first open-source system
that directly replicates Beeble's entire core pipeline, including
relighting, backed by NVIDIA's resources. See
[COMFYUI_GUIDE.md](COMFYUI_GUIDE.md) for integration details.

No patent applications were found for Beeble or its founders related
to SwitchLight, relighting, or inverse rendering (searched USPTO and
Google Patents, January 2026; note the 18-month publication delay for
recent filings). The CVPR 2024 paper has no associated code release.
Together with the architecture findings in section 4, this suggests
limited defensibility against open-source replication.

None of this means Beeble has no value. Convenience, polish, and
integration are real things people pay for. But the gap between
@@ -824,12 +892,12 @@ Electron app's 667 JavaScript files. It is a marketing name that

refers to no identifiable software component.

The CVPR 2024 paper describes a physics-based inverse rendering
architecture for SwitchLight 1.0. The deployed product is SwitchLight
3.0, which went through at least two "complete architecture rebuilds."
The application contains no evidence of physics-based rendering code
at inference time. This could mean the physics (Cook-Torrance
rendering) was used during training as a loss function, that the
architecture was replaced during the rebuilds, or both.

Beeble's marketing attributes the entire pipeline to SwitchLight
3.0. The evidence shows that alpha mattes come from InSPyReNet, depth
@@ -234,6 +234,13 @@ When running these commands, look for:

  directory versus total packages reveals the scope of missing
  attribution

- **AWS SDK presence**: The application bundles botocore with 1,823
  service definition files for 400+ AWS services. For an application
  that claims "Your files never leave your machine," the presence of
  the full AWS SDK raises questions about network connectivity.
  Network monitoring during normal operation would reveal what
  connections the application makes.
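The bullet above can be checked directly against an unpacked install: botocore ships its API models as `botocore/data/<service>/<api-version>/service-2.json`. A minimal counting sketch (the root path you pass in is a placeholder for the application's bundled botocore; note the 1,823 figure counts all definition files, of which `service-2.json` API models are only one kind):

```python
# Count bundled AWS service definitions under a botocore data directory.
from pathlib import Path

def count_aws_services(botocore_data: Path) -> tuple[int, int]:
    """Return (distinct services, service-2.json model files found)."""
    defs = list(botocore_data.rglob("service-2.json"))
    # First path component under data/ is the service name (e.g. "s3").
    services = {p.relative_to(botocore_data).parts[0] for p in defs}
    return len(services), len(defs)
```

Pointing this at something like `app/_internal/botocore/data` (path hypothetical, adjust to the actual unpacked layout) gives a quick, reproducible measure of how much of the AWS surface area ships with the application.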
## What not to do
@ -152,6 +152,171 @@ As of January 2026, the company appears to have approximately 9
|
|||||||
employees.
|
employees.
|
||||||
|
|
||||||
|
|
||||||
|
## Version history
|
||||||
|
|
||||||
|
| Version | Approximate date | Key changes |
|
||||||
|
|---------|-----------------|-------------|
|
||||||
|
| SwitchLight (mobile app) | 2022-2023 | Photo relighting app for iOS, 3M+ downloads claimed. Selfie/portrait focus. |
|
||||||
|
| SwitchLight 1.0 | Late 2023 - early 2024 | First VFX tool. Required alpha mask input. Isolated humans only. Architecture described in CVPR 2024 paper. Per-frame processing. |
|
||||||
|
| SwitchLight 2.0 | June 30, 2025 | "Complete architecture rebuild." No alpha mask required. Full-scene PBR maps (not just isolated subjects). Claimed 10x larger model, 13x more training data than 1.0. Still per-frame with post-processing deflicker. 2K resolution limit (cloud). 8-bit PNG output. User data training controversy (see below). |
|
||||||
|
| SwitchLight 3.0 | November 5, 2025 | Marketed as "true video model" with multi-frame processing. Claimed 10x more training data than 2.0 (130x more than 1.0). 4K resolution support. 16-bit EXR output. Desktop app (Beeble Studio) launched for local GPU processing. Paid users exempt from data training. |
|
||||||
|
|
||||||
|
Source: beeble.ai/research/switchlight-2-0-is-here,
|
||||||
|
beeble.ai/research/switchlight-3-0-is-here
|
||||||
|
|
||||||
|
|
||||||
|
## Team and leadership
|
||||||
|
|
||||||
|
Beeble was founded in 2022 in Seoul, South Korea by five co-founders
|
||||||
|
who previously worked at the AI research and machine learning team of
|
||||||
|
Krafton Inc., a South Korean game publisher.
|
||||||
|
|
||||||
|
**CEO: Hoon Kim**
|
||||||
|
- B.S. and M.S. in Electrical Engineering from KAIST (2012-2019)
|
||||||
|
- Research scientist at Lunit (medical AI, 2019-2020)
|
||||||
|
- Deep learning research scientist at Krafton Inc. (voice synthesis
|
||||||
|
team leader, 2020-2022)
|
||||||
|
- 6 peer-reviewed papers at ICLR, AAAI, ICML workshop
|
||||||
|
- Prior research was in autonomous driving (sim-to-real transfer,
|
||||||
|
vehicle collision prediction) and voice synthesis--not in computer
|
||||||
|
vision, relighting, or PBR decomposition
|
||||||
|
- The SwitchLight paper is his first publication in relighting
|
||||||
|
|
||||||
|
Source: gnsrla12.github.io/About-myself/
|
||||||
|
|
||||||
|
**Paper co-author: Sanghyun Woo** (last/senior author on the CVPR paper)
- Ph.D. from KAIST, currently Senior Research Scientist at Google
  DeepMind
- Previously Faculty Fellow at NYU Courant (hosted by Saining Xie)
- Creator of CBAM (Convolutional Block Attention Module, ECCV 2018)
  with 34,000+ citations
- Co-author on ConvNeXt V2 (CVPR 2023), Cambrian-1 (NeurIPS 2024 Oral)
- Listed as affiliated with NYU on the SwitchLight paper, not Beeble
- Whether his involvement extends beyond the CVPR 2024 paper is
  unknown

Source: sites.google.com/view/sanghyunwoo/

**Team size:** 9 employees as of early 2026. With 5 co-founders, this
means approximately 4 non-founder employees.

## Products and pricing

**Beeble Cloud** (web app): Credit-based processing. Free tier
(15-second clips), Creator $19/month, Professional $75/month.

**Beeble Studio** (desktop app): Local GPU processing.
Indie $504/year ($42/month, for studios under $200K revenue),
Standard $3,000/year ($250/month).

**SwitchLight API**: Available at switchlight-api.beeble.ai for
developer integration.

**Mobile app**: SwitchLight photo editor on iOS, 3M+ downloads
claimed. This is a consumer selfie relighting app, not a professional
VFX tool.

**Plugins**: Nuke, Blender, Unreal Engine integration.

The original SwitchLight Studio product was shut down and merged into
Beeble Studio. The switchlight-studio.beeble.ai domain displays a
"Closing" notice.

Source: beeble.ai/pricing, beeble.ai/pricing-cloud

## User data training controversy

When SwitchLight 2.0 launched in mid-2025, CG Channel reported that
Beeble's terms of use allowed user-uploaded content to be used for AI
training. This caused significant backlash in the VFX community, where
studios are protective of proprietary footage.

Beeble responded by changing policy: paid subscribers' content is no
longer used for training (as of the SwitchLight 3.0 launch in November
2025). Free tier uploads may still be used. The Beeble Studio desktop
app was positioned as the privacy-focused alternative, with all
processing running locally.

Source: cgchannel.com/2025/11/beeble-launches-switchlight-3-0/

## Interview statements

In a September 2025 interview with Digital Production magazine
("We have to talk about Switchlight 2.0"), CEO Hoon Kim stated:

> At its core, it's a neural net doing the heavy lifting.

When asked for architecture details, he said he "couldn't share more
details beyond that."

On training data, Kim stated all data was "created in-house using
scans of real humans and objects" with "no movies, films, or
third-party content used."

Source: digitalproduction.com/2025/09/05/we-have-to-talk-about-switchlight-2-0/

## Production credits

Boxel Studio used SwitchLight for VFX relighting sequences on
*Superman & Lois*. This appears to be Beeble's most prominent
production credit.

Source: boxelstudio.com/beeble-switchlight/

## Patent filings

No patent applications or grants were found for Beeble Inc. or any of
its founders related to SwitchLight, relighting, or inverse rendering.
Searches were conducted on USPTO Patent Public Search and Google
Patents for "Beeble," "Hoon Kim," "SwitchLight," and "portrait
relighting neural network."

Note: Patent applications have an 18-month publication delay from
filing, so recent applications may not yet be visible.

Searched: January 2026

## CVPR paper reception

The SwitchLight paper was accepted as a CVPR 2024 highlight paper
(top ~10% of accepted papers out of 11,532 submissions). Beeble
claimed perfect 5/5/5 reviewer scores.

The paper is cited as "state-of-the-art" in IC-Light's ICLR 2025
paper and is referenced in the Awesome-Relighting curated list.
No public criticisms of the paper were found, though CVPR reviews
are confidential.

The paper has no associated code release. The
beeble-ai/SwitchLight-Studio GitHub repository contains only desktop
application scripts and integration helpers, not model code. For a
CVPR highlight paper, the absence of a code release is notable.

The arXiv version is licensed CC BY-NC-SA 4.0 (non-commercial).

Source: x.com/beeble_ai/status/1763564054529159548

## Community presence

As of January 2026, there is minimal Reddit discussion of Beeble or
SwitchLight on r/vfx, r/compositing, or r/NukeVFX. For a tool that
has been used on a major television production (*Superman & Lois*) and
claims 3M mobile app downloads, the lack of organic community
discussion is notable.

There is a separate, unrelated company called "Beeble" (based in
Latvia, offering encrypted email and cloud storage) that has an
AppSumo listing with poor reviews. This is not the same as Beeble AI
but creates brand confusion.


## Notable patterns

Beeble's marketing consistently attributes the entire Video-to-VFX