initial commit
This commit is contained in:
commit 7f5815a2b4
23 .gitignore vendored Normal file
@ -0,0 +1,23 @@
# Binary artifacts (too large for git)
*.bin
*.enc
*.so
*.gpr
*.rep/

# Memory dumps
memory_dumps/
extracted_models/

# Python
__pycache__/
*.pyc
.ruff_cache/

# Node
node_modules/
package-lock.json

# Misc
.temp/
.sisyphus/
27 LICENSE Normal file
@ -0,0 +1,27 @@
Creative Commons Attribution 4.0 International License

Copyright (c) 2026

This work is licensed under the Creative Commons Attribution 4.0
International License. To view a copy of this license, visit
http://creativecommons.org/licenses/by/4.0/ or send a letter to
Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

You are free to:

Share - copy and redistribute the material in any medium or format
for any purpose, including commercially.

Adapt - remix, transform, and build upon the material for any
purpose, including commercially.

Under the following terms:

Attribution - You must give appropriate credit, provide a link to
the license, and indicate if changes were made. You may do so in
any reasonable manner, but not in any way that suggests the licensor
endorses you or your use.

No additional restrictions - You may not apply legal terms or
technological measures that legally restrict others from doing
anything the license permits.
127 README.md Normal file
@ -0,0 +1,127 @@
|
|||||||
|
# Beeble Studio: Technical Analysis
|
||||||
|
|
||||||
|
An independent technical analysis of the Beeble Studio desktop
|
||||||
|
application, examining which AI models power its Video-to-VFX
|
||||||
|
pipeline.
|
||||||
|
|
||||||
|
|
||||||
|
## Findings
|
||||||
|
|
||||||
|
Beeble's product page states that PBR, alpha, and depth map
|
||||||
|
generation are "Powered by SwitchLight 3.0." Analysis of the
|
||||||
|
application reveals a more nuanced picture:
|
||||||
|
|
||||||
|
| Pipeline stage | What Beeble says | What the application contains |
|
||||||
|
|-------------|-----------------|-------------------------------|
|
||||||
|
| Alpha matte (background removal) | "Powered by SwitchLight 3.0" | `transparent-background` / InSPyReNet (MIT) |
|
||||||
|
| Depth map | "Powered by SwitchLight 3.0" | Depth Anything V2 via Kornia (Apache 2.0) |
|
||||||
|
| Person detection | Not mentioned | RT-DETR + PP-HGNet via Kornia (Apache 2.0) |
|
||||||
|
| Face detection | Not mentioned | Kornia face detection (Apache 2.0) |
|
||||||
|
| Multi-object tracking | Not mentioned | BoxMOT via Kornia (MIT) |
|
||||||
|
| Edge detection | Not mentioned | DexiNed via Kornia (Apache 2.0) |
|
||||||
|
| Feature extraction | Not mentioned | DINOv2 via timm (Apache 2.0) |
|
||||||
|
| Segmentation | Not mentioned | segmentation_models_pytorch (MIT) |
|
||||||
|
| Super resolution | Not mentioned | RRDB-Net via Kornia (Apache 2.0) |
|
||||||
|
| PBR decomposition (normal, base color, roughness, specular, metallic) | SwitchLight 3.0 | Architecture built on segmentation_models_pytorch + timm backbones (PP-HGNet, ResNet); proprietary trained weights |
|
||||||
|
| Relighting | SwitchLight 3.0 | Proprietary (not fully characterized) |
|
||||||
|
|
||||||
|
The preprocessing pipeline--alpha mattes, depth maps, person
|
||||||
|
detection, face detection, multi-object tracking, edge detection,
|
||||||
|
segmentation, and super resolution--is built entirely from
|
||||||
|
open-source models used off the shelf.
|
||||||
|
|
||||||
|
The PBR decomposition model, marketed as part of SwitchLight 3.0,
|
||||||
|
appears to be architecturally built from the same open-source
|
||||||
|
encoder-decoder frameworks and pretrained backbones available to
|
||||||
|
anyone. No physics-based rendering code (Cook-Torrance, BRDF,
|
||||||
|
spherical harmonics) was found in the application binary, despite the
|
||||||
|
CVPR 2024 paper describing such an architecture. The proprietary
|
||||||
|
element appears to be the trained weights, not the model architecture.
|
||||||
|
|
||||||
|
The name "SwitchLight" does not appear anywhere in the application
|
||||||
|
binary, the setup binary, or the Electron app source code. It is a
|
||||||
|
marketing name with no corresponding software component.
|
||||||
|
|
||||||
|
Beeble does acknowledge the use of open-source models in their
|
||||||
|
[FAQ](https://docs.beeble.ai/help/faq): "When open-source models
|
||||||
|
are included, we choose them carefully." However, the product page
|
||||||
|
attributes all outputs to SwitchLight without distinguishing which
|
||||||
|
passes come from open-source components.

## Why this matters

Most Beeble Studio users rely on the application for PBR
extractions--alpha mattes, diffuse/albedo, normals, and depth
maps--not for relighting within the software. The alpha and depth
extractions use open-source models directly and can be replicated
for free. The PBR extractions use standard open-source architectures
with custom-trained weights. Open-source alternatives for PBR
decomposition (CHORD, RGB-X) now exist and are narrowing the gap.
See the [ComfyUI guide](docs/COMFYUI_GUIDE.md) for details.

The application bundles approximately 48 Python packages, of which
only 6 include license files. All identified open-source components
require attribution under their licenses (MIT and Apache 2.0). No
attribution was found for core components. See the
[license analysis](docs/LICENSE_ANALYSIS.md).

## Documentation

- **[Full report](docs/REPORT.md)** -- Detailed findings with
  evidence for each identified component and architecture analysis
- **[License analysis](docs/LICENSE_ANALYSIS.md)** -- License
  requirements and compliance assessment
- **[Methodology](docs/METHODOLOGY.md)** -- How the analysis was
  performed and what was not done
- **[ComfyUI guide](docs/COMFYUI_GUIDE.md)** -- How to replicate
  the pipeline with open-source tools
- **[Verification guide](docs/VERIFICATION_GUIDE.md)** -- How to
  independently verify these findings
- **[Marketing claims](evidence/marketing_claims.md)** -- Archived
  quotes from Beeble's public pages

## Methodology

The analysis combined several non-invasive techniques: string
extraction from process memory, TensorRT plugin identification,
PyInstaller module listing, Electron app source inspection, library
directory inventory, and manifest analysis. No code was decompiled,
no encryption was broken, and no proprietary logic was examined. The
full methodology is documented [here](docs/METHODOLOGY.md).
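As an illustration of the string-extraction step, a minimal sketch. The `demo.bin` file here is a stand-in we create ourselves; the real analysis targets the Beeble Studio executable and memory dumps:

```bash
# Stand-in binary for demonstration (the real target is the Beeble
# executable; its path is deliberately not reproduced here).
printf 'randombytes\0kornia.contrib\0timm.create_model\0' > demo.bin

# Extract printable strings and search for known library markers.
strings demo.bin | grep -iE 'kornia|timm|inspyrenet|switchlight' | sort -u
```

The PyInstaller module listing uses the same non-invasive approach: PyInstaller ships an archive viewer that enumerates the modules frozen into a bundle without decompiling any of them.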

## What this is

This is a factual technical analysis. The evidence is presented so
that VFX professionals can make informed decisions about the tools
they use. All claims are verifiable using the methods described in
the [verification guide](docs/VERIFICATION_GUIDE.md).


## What this is not

This is not an accusation of wrongdoing. Using open-source software
in commercial products is normal, legal, and encouraged by the
open-source community. Using open-source architectures with custom
training data is how most ML products are built. The concerns
documented here are narrower:

1. Marketing language that attributes open-source outputs to
   proprietary technology
2. Investor-facing claims about a "foundational model" that appears
   to be a pipeline of open-source components with proprietary weights
3. A CVPR paper describing a physics-based architecture that does
   not appear to match the deployed application
4. Missing license attribution required by MIT and Apache 2.0

The first and fourth are correctable. The second is a question for
investors. The third is a question for the research community.


## License

This repository is licensed under
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
381 docs/COMFYUI_GUIDE.md Normal file
@ -0,0 +1,381 @@
# Replicating Beeble's Pipeline with Open-Source Tools

Most Beeble Studio users pay for PBR extractions--alpha mattes, depth
maps, normal maps--rather than the relighting. The extraction pipeline
is built from open-source models, and the PBR decomposition and
relighting stages now have viable open-source alternatives too.

This guide documents how to replicate each stage using ComfyUI and
direct Python. No workflow JSON files are provided yet, but the
relevant nodes, models, and tradeoffs are documented below.

If you are unfamiliar with ComfyUI, see https://docs.comfy.org/


## Pipeline overview

```
Input frame
  |
  +--> Background removal --> Alpha matte
  |    (InSPyReNet / BiRefNet)
  |
  +--> Depth estimation --> Depth map
  |    (Depth Anything V2)
  |
  +--> Normal estimation --> Normal map
  |    (StableNormal / NormalCrafter)
  |
  +--> PBR decomposition --> Albedo, Roughness, Metallic
  |    (CHORD / RGB↔X)
  |
  +--> Relighting --> Relit output
       (IC-Light / manual in Blender/Nuke)
```

The first two stages use the exact same models Beeble uses. The
remaining stages use different models that produce comparable outputs.

## 1. Background removal (Alpha matte)

**What Beeble uses**: transparent-background / InSPyReNet (MIT)

This is the simplest stage. Several ComfyUI nodes wrap the same
underlying model:

- **ComfyUI-InSPyReNet** -- wraps the same `transparent-background`
  library Beeble uses. Install via ComfyUI Manager.
- **ComfyUI-BiRefNet** -- uses BiRefNet, a newer model that often
  produces sharper edges around hair and fine detail.
- **ComfyUI-RMBG** -- BRIA's background removal model, another
  strong alternative.

For video, connect an image sequence loader to the removal node and
export the alpha channel as a separate pass. These models process
per-frame, so there is no temporal consistency--but alpha mattes are
typically stable enough that this is not a problem.

**Direct Python**:
```bash
pip install transparent-background
```
```python
from PIL import Image
from transparent_background import Remover

remover = Remover()  # downloads InSPyReNet weights on first use
image = Image.open('frame_0001.png').convert('RGB')  # illustrative path
alpha = remover.process(image, type='map')  # grayscale alpha matte
```
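Once you have a matte, a compositing-ready premultiplied RGBA frame is plain array arithmetic. A sketch with NumPy only (the array names and shapes are illustrative):

```python
import numpy as np

def premultiply(rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Combine an RGB frame (H, W, 3) and a single-channel matte (H, W),
    both float32 in [0, 1], into premultiplied RGBA (H, W, 4)."""
    a = alpha[..., None]                     # (H, W) -> (H, W, 1)
    return np.concatenate([rgb * a, a], axis=-1)

frame = np.random.rand(4, 4, 3).astype(np.float32)  # stand-in frame
matte = np.random.rand(4, 4).astype(np.float32)     # stand-in matte
rgba = premultiply(frame, matte)
print(rgba.shape)  # (4, 4, 4)
```

Premultiplying at export time keeps downstream compositing apps (Nuke, After Effects) from re-deriving edge transparency.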

## 2. Depth estimation

**What Beeble uses**: Depth Anything V2 via Kornia (Apache 2.0)

- **ComfyUI-DepthAnythingV2** -- dedicated nodes for all model sizes.
- **comfyui_controlnet_aux** -- includes Depth Anything V2 as a
  preprocessor option.

Use the `large` variant for best quality. This is a per-frame model
with no temporal information, but monocular depth tends to be stable
across frames for most footage.

**Direct Python**:
```bash
pip install kornia
```
```python
# Note: the exact import path may vary between Kornia releases;
# check the Kornia documentation for your installed version.
from kornia.contrib import DepthAnything

model = DepthAnything.from_pretrained("depth-anything-v2-large")
depth = model(image_tensor)  # image_tensor: (B, 3, H, W) float tensor
```
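Monocular depth comes out as relative values, so exporting it as a 16-bit pass usually requires normalization. A sketch (NumPy only; the clip layout is an assumption). Normalizing over the whole clip rather than per frame avoids brightness flicker in the depth pass:

```python
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    """Scale a relative depth clip to uint16 [0, 65535] for a 16-bit pass.
    'depth' holds the whole clip (frames, H, W) so one min/max applies
    to every frame, keeping the pass temporally stable."""
    d_min, d_max = depth.min(), depth.max()
    norm = (depth - d_min) / max(d_max - d_min, 1e-8)
    return (norm * 65535).astype(np.uint16)

clip = np.random.rand(3, 8, 8).astype(np.float32)  # stand-in: 3 frames
passes = normalize_depth(clip)
print(passes.dtype)  # uint16
```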

## 3. Normal estimation

**What Beeble claims**: The CVPR 2024 paper describes a dedicated
"Normal Net" within SwitchLight. However, analysis of the deployed
application found no evidence of this specific architecture--the
PBR models appear to use standard encoder-decoder segmentation
frameworks with pretrained backbones (see [REPORT.md](REPORT.md)
section 4 for details).

Multiple open-source models now produce high-quality surface normals
from single images, and one handles video with temporal consistency.

### For single images

- **StableNormal** (SIGGRAPH Asia 2024) -- currently the best
  benchmarks for monocular normal estimation. Uses a two-stage
  coarse-to-fine strategy with DINOv2 semantic features for guidance.
  A turbo variant runs 10x faster with minimal quality loss.
  GitHub: https://github.com/Stable-X/StableNormal

- **DSINE** (CVPR 2024) -- discriminative CNN-based approach. No
  diffusion overhead, so it is fast. Competitive with StableNormal on
  NYUv2 benchmarks. A good choice when inference speed matters.
  GitHub: https://github.com/markkua/DSINE

- **GeoWizard** (ECCV 2024) -- jointly predicts depth AND normals
  from a single image, which guarantees geometric consistency between
  the two. Available in ComfyUI via ComfyUI-Geowizard.
  GitHub: https://github.com/fuxiao0719/GeoWizard

### For video (temporally consistent normals)

- **NormalCrafter** (2025) -- the most relevant model for
  replicating Beeble's video pipeline. It uses video diffusion priors
  to produce temporally consistent normal maps across frames,
  directly comparable to SwitchLight 3.0's "true video model" claim.
  Has ComfyUI nodes via ComfyUI-NormalCrafterWrapper.
  GitHub: https://github.com/AIWarper/ComfyUI-NormalCrafterWrapper
  Paper: https://arxiv.org/abs/2504.11427

Key parameters for ComfyUI:
- `window_size`: number of frames processed together (default 14).
  Larger = better temporal consistency, more VRAM.
- `time_step_size`: how far the window slides. Set it smaller than
  `window_size` for overlapping windows and smoother transitions.
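The interaction between the two parameters is plain index arithmetic. A sketch (illustrative, not NormalCrafter's actual code):

```python
def sliding_windows(n_frames: int, window_size: int = 14,
                    time_step_size: int = 10) -> list[range]:
    """Frame windows as an overlapping-window video model sees them.
    time_step_size < window_size makes consecutive windows overlap,
    which is what smooths the transitions between them."""
    windows = []
    start = 0
    while start < n_frames:
        windows.append(range(start, min(start + window_size, n_frames)))
        if start + window_size >= n_frames:
            break
        start += time_step_size
    return windows

# 30 frames, window 14, step 10 -> windows [0-13], [10-23], [20-29],
# each overlapping its neighbor by 4 frames.
for w in sliding_windows(30):
    print(w[0], "-", w[-1])
```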

**Assessment**: For static images, StableNormal likely matches or
exceeds Beeble's normal quality, since it is a specialized model
rather than one sub-network within a larger system. For video,
NormalCrafter addresses the temporal consistency problem that was
previously a key differentiator of Beeble's pipeline.

## 4. PBR material decomposition (Albedo, Roughness, Metallic)

**What Beeble claims**: The CVPR 2024 paper describes a "Specular Net"
and analytical albedo derivation using a Cook-Torrance reflectance
model. Analysis of the deployed application found no Cook-Torrance,
BRDF, or physics-based rendering terminology in the binary. The PBR
models appear to use standard segmentation architectures
(segmentation_models_pytorch with pretrained backbones) trained on
proprietary portrait data. See [REPORT.md](REPORT.md) section 4.

Regardless of how Beeble implements PBR decomposition, this is the
hardest stage to replicate with open-source tools. Beeble's model was
trained on portrait and human subject data. The open-source
alternatives were trained on different data, which affects quality
for human subjects.

### Available models

- **CHORD** (Ubisoft La Forge, SIGGRAPH Asia 2025) -- the most
  complete open-source option. Decomposes a single image into base
  color, normal, height, roughness, and metalness using chained
  diffusion. Has official ComfyUI nodes from Ubisoft. Weights on
  HuggingFace (`Ubisoft/ubisoft-laforge-chord`).
  GitHub: https://github.com/ubisoft/ComfyUI-Chord
  **License: Research-only (Ubisoft ML License)**

  Limitation: trained on the MatSynth dataset (~5700 PBR materials),
  which is texture/material focused. Results on human skin, hair, and
  clothing will be plausible but not specifically optimized for
  portrait data. The authors note metalness prediction is notably
  difficult.

- **RGB↔X** (Adobe, SIGGRAPH 2024) -- decomposes into albedo,
  roughness, metallicity, normals, AND estimates lighting. Trained on
  interior scenes. Fully open-source code and weights.
  GitHub: https://github.com/zheng95z/rgbx
  Minimum 12GB VRAM recommended.

  Limitation: trained on interior scene data, not portrait/human
  data. The albedo estimation for rooms and furniture is strong; for
  human subjects it is less well characterized.

- **PBRify Remix** -- a simpler model for generating PBR maps from
  diffuse textures. Trained on CC0 data from ambientCG, so no license
  concerns. Designed for game texture upscaling rather than
  photographic decomposition.
  GitHub: https://github.com/Kim2091/PBRify_Remix

### The honest gap

Beeble's PBR model was trained on portrait and human subject data
(likely lightstage captures, based on the CVPR paper). The
open-source alternatives were trained on material textures or interior
scenes. For portrait work, this means:

- Skin subsurface scattering properties will be better captured by
  Beeble's model
- Hair specularity and anisotropy are hard for general-purpose models
- Clothing material properties (roughness, metallic) should be
  comparable

For non-portrait subjects (products, environments, objects), the
open-source models may actually perform better, since they were
trained on more diverse material data.

If your goal is manual relighting in Blender or Nuke rather than
automated AI relighting, "good enough" PBR passes are often
sufficient because you retain artistic control over the final result.

### On training data and the "moat"

The CVPR paper frames lightstage training data as a significant
competitive advantage. This deserves scrutiny from VFX professionals.

For PBR decomposition training, what you actually need is a dataset
of images paired with ground-truth PBR maps--albedo, normal,
roughness, metallic. Physical lightstage captures are one way to
obtain this data, but modern synthetic rendering provides the same
thing more cheaply and at greater scale:

- **Unreal Engine MetaHumans**: photorealistic digital humans with
  full PBR material definitions. Render them under varied lighting
  and you have ground-truth PBR for each frame.
- **Blender character generators** (Human Generator, MB-Lab):
  produce characters with known material properties that can be
  rendered procedurally.
- **Houdini procedural pipelines**: can generate hundreds of
  thousands of unique character/lighting/pose combinations
  programmatically.

The ground truth is inherent in synthetic rendering: you created the
scene, so you already have the PBR maps. A VFX studio with a
standard character pipeline could generate a training dataset in a
week.
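The dataset-assembly side of that pipeline is mostly bookkeeping. A sketch that pairs rendered passes into training samples (the `<frame>_<pass>.exr` naming scheme is a hypothetical convention, not any particular renderer's default):

```python
from collections import defaultdict
from pathlib import Path

PASSES = ("beauty", "albedo", "normal", "roughness", "metallic")

def collect_samples(render_dir: Path) -> dict[str, dict[str, Path]]:
    """Group files named <frame>_<pass>.exr into one record per frame,
    keeping only frames where every ground-truth pass is present."""
    frames: dict[str, dict[str, Path]] = defaultdict(dict)
    for f in render_dir.glob("*.exr"):
        frame_id, _, pass_name = f.stem.rpartition("_")
        if pass_name in PASSES:
            frames[frame_id][pass_name] = f
    return {k: v for k, v in frames.items() if len(v) == len(PASSES)}

# Demo on a throwaway directory:
import tempfile
d = Path(tempfile.mkdtemp())
for p in PASSES:
    (d / f"f0001_{p}.exr").touch()
(d / "f0002_beauty.exr").touch()    # incomplete frame, gets dropped
print(sorted(collect_samples(d)))   # ['f0001']
```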

With model sizes under 2 GB (based on the encrypted model files in
Beeble's distribution) and standard encoder-decoder architectures,
the compute cost to train equivalent models from synthetic data is
modest--well within reach of independent researchers or small studios.

This does not mean Beeble's trained weights are worthless. But the
barrier to replication is lower than the marketing suggests,
especially given that the model architectures are standard
open-source frameworks.

## 5. Relighting

**What Beeble claims**: The CVPR paper describes a "Render Net" for
relighting. This is the least well-characterized stage in our
analysis--the relighting model's architecture could not be determined
from the available evidence.

### AI-based relighting

- **IC-Light** (ICLR 2025, by lllyasviel, the ControlNet creator) --
  the leading open-source relighting model. Two modes: text-conditioned
  (describe the target lighting) and background-conditioned (provide a
  background image whose lighting should be matched). Based on Stable
  Diffusion.
  GitHub: https://github.com/lllyasviel/IC-Light

IC-Light uses diffusion-based lighting transfer rather than
physics-based rendering. The results look different--less physically
precise but more flexible in terms of creative lighting scenarios.

Available in ComfyUI via multiple community node packages.

### Manual relighting with PBR passes

If you have normal, albedo, roughness, and metallic maps from steps
3-4, you can do relighting directly in any 3D application:

- **Blender**: Import the passes as textures on a plane, apply a
  Principled BSDF shader, and light the scene with any HDRI or light
  setup. This gives you full artistic control.
- **Nuke**: Use the PBR passes with Nuke's relighting nodes for
  compositing-native workflows.
- **Unreal Engine**: Import as material textures for real-time PBR
  rendering.

This approach is arguably more powerful than SwitchLight for
professional VFX work because you have complete control over the
lighting. The tradeoff is that it requires manual setup rather than
one-click processing.


## 6. Feature extraction and segmentation

**What Beeble uses**: DINOv2 via timm (feature extraction),
segmentation_models_pytorch (segmentation)

These are intermediate pipeline components in Beeble's architecture.
DINOv2 produces feature maps that feed into other models, and the
segmentation model likely handles scene parsing or material
classification.

Most users replicating Beeble's outputs will not need these directly.
StableNormal already uses DINOv2 features internally, and CHORD
handles its own segmentation. If you do need them:

```bash
pip install timm segmentation-models-pytorch
```
```python
import timm
import torch

model = timm.create_model('vit_large_patch14_dinov2.lvd142m',
                          pretrained=True)
model.eval()
with torch.no_grad():
    # DINOv2 ViT-L/14 at its native 518x518 input resolution
    features = model.forward_features(torch.randn(1, 3, 518, 518))
```

## Comparison with Beeble

| Pipeline stage | Beeble model | Open-source equivalent | Parity |
|----------------|--------------|------------------------|--------|
| Person detection | RT-DETR (open source) | RT-DETR / YOLOv8 | Identical (same model) |
| Face detection | Kornia face detection (open source) | Kornia / RetinaFace | Identical (same model) |
| Tracking | BoxMOT (open source) | BoxMOT / ByteTrack | Identical (same model) |
| Alpha matte | InSPyReNet (open source) | InSPyReNet / BiRefNet | Identical (same model) |
| Depth map | Depth Anything V2 (open source) | Depth Anything V2 | Identical (same model) |
| Edge detection | DexiNed (open source) | DexiNed | Identical (same model) |
| Normal map | SMP + timm backbone (proprietary weights) | StableNormal / NormalCrafter | Comparable or better |
| Base color | SMP + timm backbone (proprietary weights) | CHORD / RGB-X | Weaker for portraits |
| Roughness | SMP + timm backbone (proprietary weights) | CHORD / RGB-X | Weaker for portraits |
| Metallic | SMP + timm backbone (proprietary weights) | CHORD / RGB-X | Weaker for portraits |
| Specular | SMP + timm backbone (proprietary weights) | CHORD / RGB-X | Weaker for portraits |
| Super resolution | RRDB-Net (open source) | ESRGAN / Real-ESRGAN | Identical (same model) |
| Relighting | Proprietary (not fully characterized) | IC-Light / manual | Different approach |

The "Beeble model" column reflects what was found in the application
binary, not what the CVPR paper describes. See
[REPORT.md](REPORT.md) section 4 for the full architecture analysis.

Where open-source matches or exceeds Beeble: alpha, depth, normals,
detection, tracking, edge detection, and super resolution. Every
preprocessing stage in Beeble's pipeline uses the same open-source
models you can use directly. For video normals, NormalCrafter
provides temporal consistency comparable to Beeble's pipeline.

Where Beeble retains an advantage: PBR material decomposition for
human subjects (base color, roughness, metallic, specular). While the
architecture appears to use standard open-source frameworks, the
model was trained on portrait-specific data. The open-source PBR
models were trained on material textures and interior scenes. However,
as discussed above, the barrier to creating equivalent training data
using synthetic rendering is lower than commonly assumed.

Where open-source wins on flexibility: manual relighting in
Blender/Nuke with the extracted PBR passes gives full artistic control
that Beeble's automated pipeline does not offer.

## What this means for Beeble users

If you primarily use Beeble for alpha mattes and depth maps, you can
replicate those results for free using the exact same models.

If you use Beeble for normal maps, the open-source alternatives are
now competitive and in some cases better, with NormalCrafter solving
the video temporal consistency problem.

If you use Beeble for full PBR decomposition of portrait footage and
need high-quality material properties, Beeble's model still has an
edge due to its portrait-specific training data. But the gap is
narrowing as models like CHORD improve.

If you use Beeble for one-click relighting, IC-Light provides a
different but functional alternative, and manual PBR relighting in
Blender/Nuke gives you more control.

The core value proposition of Beeble Studio--beyond the models
themselves--is convenience. It packages everything into a single
application with a render queue, plugin integrations, and a polished
UX. Replicating the pipeline in ComfyUI requires more setup and
technical knowledge, but it costs nothing and gives you full control
over every stage.
254 docs/LICENSE_ANALYSIS.md Normal file
@ -0,0 +1,254 @@
# License Compliance Analysis

This document examines the license requirements of each open-source
component identified in Beeble Studio and assesses whether those
requirements appear to be met in the distributed application.

This is not a legal opinion. It is a factual comparison of what the
licenses require and what the application provides.


## Summary

Nine open-source components with distinct roles were identified in
Beeble Studio's pipeline. Each has a permissive license that allows
commercial use. However, all of these licenses require attribution--a
notice in the distributed software acknowledging the original authors
and reproducing the license text. No such attribution was found for
any component.

| Component | License | Requires Attribution | Attribution Found |
|-----------|---------|----------------------|-------------------|
| transparent-background (InSPyReNet) | MIT | Yes | No |
| segmentation_models_pytorch | MIT | Yes | No |
| Depth Anything V2 (via Kornia) | Apache 2.0 | Yes | No |
| DINOv2 (via timm) | Apache 2.0 | Yes | No |
| PP-HGNet (via timm) | Apache 2.0 | Yes | No |
| RT-DETR (via Kornia) | Apache 2.0 | Yes | No |
| BoxMOT | MIT | Yes | No |
| DexiNed (via Kornia) | Apache 2.0 | Yes | No |
| RRDB-Net / ESRGAN (via Kornia) | Apache 2.0 | Yes | No |

Beyond the core pipeline components, the application bundles
approximately 48 Python packages in its `lib/` directory. Of these,
only 6 include LICENSE files: cryptography, gdown, MarkupSafe, numpy,
openexr, and triton. The remaining 42 packages--including PyTorch
(BSD 3-Clause), Kornia (Apache 2.0), Pillow (MIT-CMU), timm (Apache
2.0), and many others--are distributed without their license files.
While some of these licenses (like BSD) only require attribution when
source code is redistributed, others (MIT, Apache 2.0) require
attribution in binary distributions as well.
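The bundled-package check described above is straightforward to reproduce. A sketch that inventories which packages under a `lib/` directory ship a recognizable license file (stdlib only; the directory layout and filename set are assumptions):

```python
from pathlib import Path
import tempfile

LICENSE_NAMES = {"license", "license.txt", "license.md",
                 "licence", "copying", "notice"}

def missing_licenses(lib_dir: Path) -> list[str]:
    """Return bundled package directories that contain no recognizable
    license file anywhere in their tree."""
    missing = []
    for pkg in sorted(p for p in lib_dir.iterdir() if p.is_dir()):
        has_license = any(f.is_file() and f.name.lower() in LICENSE_NAMES
                          for f in pkg.rglob("*"))
        if not has_license:
            missing.append(pkg.name)
    return missing

# Demo on a throwaway layout: one package ships a license, one does not.
lib = Path(tempfile.mkdtemp())
(lib / "numpy").mkdir();  (lib / "numpy" / "LICENSE.txt").touch()
(lib / "kornia").mkdir(); (lib / "kornia" / "core.py").touch()
print(missing_licenses(lib))  # ['kornia']
```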
|
||||||
|
|
||||||
|
|
||||||
|
## MIT License requirements

**Applies to**: transparent-background, segmentation_models_pytorch

The MIT License is one of the most permissive open-source licenses.
It permits commercial use, modification, and redistribution with a
single condition:

> The above copyright notice and this permission notice shall be
> included in all copies or substantial portions of the Software.

This means any application that includes MIT-licensed code must
include the original copyright notice and the MIT License text
somewhere accessible to end users--typically in an "About" dialog,
a LICENSES file, or accompanying documentation.

**What Beeble provides**: No copyright notice or license text for
transparent-background or segmentation_models_pytorch was found in
the application's user-facing materials, documentation, or binary
strings. The libraries themselves are embedded in the application
binary, and their license files do not appear to be distributed
alongside the application.

## Apache License 2.0 requirements

**Applies to**: Kornia (through which Depth Anything V2 is accessed),
timm (through which DINOv2 is accessed)

The Apache 2.0 License is also permissive but has somewhat more
specific requirements than MIT. Section 4 ("Redistribution") states:

> You must give any other recipients of the Work or Derivative Works
> a copy of this License; and
>
> You must cause any modified files to carry prominent notices stating
> that You changed the files; and
>
> You must retain, in the Source form of any Derivative Works that You
> distribute, all copyright, patent, trademark, and attribution
> notices from the Source form of the Work [...]

Additionally, if a NOTICE file is included with the original work,
its contents must be included in the redistribution.

**What Beeble provides**: No Apache 2.0 license text, NOTICE file
contents, or copyright notices for Kornia, timm, or their associated
models were found in the application.

## On encrypting open-source models

Beeble encrypts its model files (stored as `.enc` files with AES
encryption). A reasonable question is whether encrypting open-source
models violates their licenses.

The answer is nuanced. Neither the MIT License nor the Apache 2.0
License prohibits encryption of the licensed software. Encryption is
a form of packaging, and permissive licenses generally do not restrict
how software is packaged or distributed, as long as the license terms
are met.

The issue is not the encryption itself. The issue is that encryption
makes it non-obvious to users that they are running open-source
software, which compounds the problem of missing attribution. When
the models are encrypted and no attribution is provided, users have
no way to know that the "proprietary AI" they are paying for includes
freely available open-source components.

## Beeble's own statements

Beeble's FAQ acknowledges the use of open-source models:

> When open-source models are included, we choose them
> carefully--only those with published research papers that disclose
> their training data and carry valid commercial-use licenses.

This statement is accurate in that the identified licenses (MIT,
Apache 2.0) do permit commercial use. But having a "valid
commercial-use license" is not the same as complying with that
license. Both MIT and Apache 2.0 allow commercial use *on the
condition that attribution is provided*. The licenses do not
grant unconditional commercial use.

## Legal precedent

The enforceability of open-source license conditions was established
in *Jacobsen v. Katzer* (Fed. Cir. 2008). The court held that
open-source license terms (including attribution requirements) are
enforceable conditions on the copyright license, not merely
contractual covenants. This means that failing to comply with
attribution requirements is not just a breach of contract--it can
constitute copyright infringement.

This precedent applies to both the MIT and Apache 2.0 licenses used
by the components identified in Beeble Studio.

## What compliance would look like

For reference, meeting the attribution requirements of these licenses
would involve:

1. Including a LICENSES or THIRD_PARTY_NOTICES file with the
   application that lists each open-source component, its authors,
   and the full license text

2. Making this file accessible to users (e.g., through an "About"
   dialog, a menu item, or documentation)

3. For Apache 2.0 components, including any NOTICE files provided
   by the original projects

This is standard practice in commercial software. Most desktop
applications, mobile apps, and web services that use open-source
components include such notices.
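
As a rough illustration of step 1, Python's standard library can enumerate
the installed distributions and flag the ones that ship without a license
file. This is generic compliance tooling sketched for illustration, not
Beeble's build process; the output format is invented:

```python
from importlib import metadata

def collect_notices():
    """Gather (package, declared license, shipped license files) for every
    installed distribution, as raw input for a THIRD_PARTY_NOTICES file."""
    rows = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        declared = dist.metadata.get("License") or "unspecified"
        shipped = [str(f) for f in (dist.files or [])
                   if f.name.upper().startswith(("LICENSE", "COPYING", "NOTICE"))]
        rows.append((name, declared, shipped))
    return rows

if __name__ == "__main__":
    for name, declared, shipped in sorted(collect_notices()):
        marker = "" if shipped else "  <-- no license file shipped"
        print(f"{name}: {declared}{marker}")
```

Run against a deployed `lib/` directory (on `sys.path`), the same walk would
surface exactly the 6-of-48 gap described above.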
## Per-component detail

### transparent-background / InSPyReNet

- **Repository**: https://github.com/plemeri/transparent-background
- **License**: MIT
- **Authors**: Taehun Kim et al.
- **Paper**: "Revisiting Image Pyramid Structure for High Resolution
  Salient Object Detection" (ACCV 2022)
- **License file**: https://github.com/plemeri/transparent-background/blob/main/LICENSE
- **Attribution found in Beeble**: No

### segmentation_models_pytorch

- **Repository**: https://github.com/qubvel-org/segmentation_models.pytorch
- **License**: MIT
- **Author**: Pavel Iakubovskii (qubvel)
- **License file**: https://github.com/qubvel-org/segmentation_models.pytorch/blob/main/LICENSE
- **Attribution found in Beeble**: No

### Kornia (access layer for multiple models)

Kornia serves as the access layer for several models in the pipeline:
Depth Anything V2, RT-DETR, face detection, BoxMOT tracking, DexiNed
edge detection, RRDB-Net super resolution, and the
segmentation_models_pytorch wrapper.

- **Repository**: https://github.com/kornia/kornia
- **License**: Apache 2.0
- **Paper (Depth Anything V2)**: Yang et al., "Depth Anything V2"
  (2024), https://arxiv.org/abs/2406.09414
- **License file**: https://github.com/kornia/kornia/blob/main/LICENSE
- **Attribution found in Beeble**: No

### timm / DINOv2 / PP-HGNet

- **Repository (timm)**: https://github.com/huggingface/pytorch-image-models
- **License (timm)**: Apache 2.0
- **Author**: Ross Wightman
- **Model (DINOv2)**: Oquab et al., Meta AI (2023),
  https://arxiv.org/abs/2304.07193
- **Model (PP-HGNet)**: PaddlePaddle / Baidu,
  https://github.com/PaddlePaddle/PaddleClas
- **License file**: https://github.com/huggingface/pytorch-image-models/blob/main/LICENSE
- **Attribution found in Beeble**: No

DINOv2 and PP-HGNet are both accessed through timm's model registry.
DINOv2 serves as a feature extractor; PP-HGNet serves as a backbone
encoder for both RT-DETR detection and the PBR decomposition models.
Both are covered by timm's Apache 2.0 license. The underlying DINOv2
model weights carry their own Apache 2.0 license from Meta AI.

### RT-DETR (person detection)

- **Repository**: Accessed via Kornia (`kornia.contrib.models.rt_detr`)
- **Original**: https://github.com/lyuwenyu/RT-DETR
- **License**: Apache 2.0 (via Kornia)
- **Paper**: Zhao et al., "DETRs Beat YOLOs on Real-time Object
  Detection" (ICLR 2024), https://arxiv.org/abs/2304.08069
- **Attribution found in Beeble**: No

### BoxMOT (multi-object tracking)

- **Repository**: https://github.com/mikel-brostrom/boxmot
- **License**: MIT (some tracker variants are AGPL-3.0)
- **Accessed via**: `kornia.models.tracking.boxmot_tracker`
- **Attribution found in Beeble**: No

Note: BoxMOT contains multiple tracking algorithms with different
licenses. Some trackers (StrongSORT, BoTSORT) are MIT-licensed,
while others may carry AGPL-3.0 restrictions. Without visibility
into which tracker variant Beeble uses, the exact license obligation
cannot be determined. If an AGPL-3.0 tracker is used, the license
requirements would be significantly more restrictive than MIT.

### DexiNed (edge detection)

- **Repository**: https://github.com/xavysp/DexiNed
- **License**: MIT (original); Apache 2.0 (via Kornia integration)
- **Paper**: Soria et al., "Dense Extreme Inception Network for Edge
  Detection" (2020)
- **Accessed via**: `kornia.models.edge_detection.dexined`
- **Attribution found in Beeble**: No

### RRDB-Net / ESRGAN (super resolution)

- **Repository**: https://github.com/xinntao/ESRGAN
- **License**: Apache 2.0 (via Kornia integration)
- **Paper**: Wang et al., "ESRGAN: Enhanced Super-Resolution
  Generative Adversarial Networks" (2018)
- **Accessed via**: `kornia.models.super_resolution.rrdbnet`
- **Attribution found in Beeble**: No

179
docs/METHODOLOGY.md
Normal file
@ -0,0 +1,179 @@

# Methodology

This document describes how the analysis was performed and, equally
important, what was not done.

## Approach

The analysis used standard forensic techniques that any security
researcher or system administrator would recognize. No proprietary
code was reverse-engineered, no encryption was broken, and no
software was decompiled.

Seven complementary methods were used, each revealing different
aspects of the application's composition.

## What was done

### 1. String extraction from process memory

The core technique. When a Linux application runs, its loaded
libraries, model metadata, and configuration data are present in
process memory as readable strings. The `strings` command and
standard text search tools extract these without interacting with the
application's logic in any way.

This is the same technique used in malware analysis, software
auditing, and license compliance verification across the industry.
It reveals what libraries and models are loaded, but not how they
are used or what proprietary code does with them.

Extracted strings were searched for known identifiers--library names,
model checkpoint filenames, Python package docstrings, API
signatures, and TensorRT layer names that correspond to published
open-source projects. Each match was compared against the source code
of the corresponding open-source project to confirm identity.
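
The search step can be approximated in a few lines of Python. This is a
minimal sketch of the `strings`-plus-search workflow; the real analysis used
the standard CLI tools, and the identifier list here is illustrative:

```python
import re

def extract_strings(data: bytes, min_len: int = 8):
    """Pull printable-ASCII runs out of a binary blob, like `strings -n 8`."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

def match_identifiers(strings, needles):
    """Keep only the strings mentioning a known library or model identifier."""
    return [s for s in strings if any(n in s for n in needles)]
```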

### 2. TensorRT plugin analysis

TensorRT plugins are named components compiled for GPU inference.
Their names appear in the binary and reveal which neural network
operations are being used. Standard plugins (like convolution or
batch normalization) are not informative, but custom plugins with
distinctive names--like `DisentangledAttention_TRT` or
`RnRes2FullFusion_TRT`--identify specific architectures.

Plugin names, along with quantization patterns (e.g.,
`int8_resnet50_stage_2_fusion`), indicate which backbone
architectures have been compiled for production inference and at
what precision.
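
Registered TensorRT plugin names conventionally carry a `_TRT` suffix, which
makes them easy to sift out of already-extracted strings. A minimal sketch:

```python
import re

# TensorRT-style plugin names: an identifier ending in the `_TRT` suffix.
TRT_NAME = re.compile(r"^[A-Za-z0-9]\w*_TRT$")

def find_trt_plugins(strings):
    """Filter extracted strings down to TensorRT-style plugin names."""
    return sorted({s for s in strings if TRT_NAME.match(s)})
```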

### 3. PyInstaller module listing

The `beeble-ai` binary is a PyInstaller-packaged Python application.
PyInstaller bundles Python modules into an archive whose table of
contents is readable without executing the application. This reveals
which Python packages are bundled, including both open-source
libraries and obfuscated proprietary modules.

The module listing identified 7,132 bundled Python modules, including
the Pyarmor runtime used to encrypt Beeble's custom code. The
obfuscated module structure (three main packages with randomized
names, totaling approximately 82 modules) reveals the approximate
scope of the proprietary code.

### 4. Electron app inspection

Beeble Studio's desktop UI is an Electron application. The compiled
JavaScript code in the `dist/` directory is not obfuscated and
reveals how the UI orchestrates the Python engine binary. This
analysis examined:

- CLI flag construction (what arguments are passed to the engine)
- Database schema (what data is stored about jobs and outputs)
- Output directory structure (what files the engine produces)
- Progress reporting (what processing stages the engine reports)

This is the source of evidence about independent processing stages
(`--run-alpha`, `--run-depth`, `--run-pbr`), the PBR frame stride
parameter, and the output channel structure.
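
The CLI-flag portion of this inspection amounts to a text scan over the
`dist/` JavaScript. A minimal sketch, where the `--run-` prefix matches the
stage flags noted above and everything else is generic:

```python
import os
import re

def find_engine_flags(dist_dir):
    """Scan unobfuscated Electron JS files for engine stage flags like --run-alpha."""
    flag_re = re.compile(r"--run-[a-z]+")
    flags = set()
    for root, _dirs, files in os.walk(dist_dir):
        for name in files:
            if not name.endswith(".js"):
                continue
            with open(os.path.join(root, name), encoding="utf-8",
                      errors="ignore") as f:
                flags.update(flag_re.findall(f.read()))
    return sorted(flags)
```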

### 5. Library directory inventory

The application's `lib/` directory contains approximately 48 Python
packages deployed alongside the main binary. These were inventoried
to determine which packages are present, their version numbers, and
whether license files are included. This is a straightforward
directory listing--no files were extracted, modified, or executed.

The inventory revealed specific library versions (PyTorch 2.8.0,
timm 1.0.15, OpenCV 4.11.0.86), confirmed which packages are
deployed as separate directories versus compiled into the PyInstaller
binary, and identified the license file gap (only 6 of 48 packages
include their license files).
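
The inventory itself is a directory walk. A minimal sketch of the
license-gap check, assuming one subdirectory per bundled package:

```python
from pathlib import Path

def license_gap(lib_dir):
    """List bundled package directories that ship no license-like file."""
    missing = []
    for pkg in sorted(p for p in Path(lib_dir).iterdir() if p.is_dir()):
        has_license = any(
            f.is_file() and f.name.upper().startswith(("LICENSE", "COPYING", "NOTICE"))
            for f in pkg.rglob("*"))
        if not has_license:
            missing.append(pkg.name)
    return missing
```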

### 6. Engine setup log analysis

The application's setup process produces a detailed log file that
records every file downloaded during installation. This log was
read to understand the full scope of the deployment: total file
count, total download size, and the complete list of downloaded
components. The log is generated during normal operation and does
not require any special access to read.

### 7. Manifest and public claims review

The application's `manifest.json` file, downloaded during normal
operation, was inspected for model references and metadata. Beeble's
website, documentation, FAQ, and research pages were reviewed to
understand how the technology is described to users. All public
claims were archived with URLs and timestamps.
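
Pulling model references out of such a manifest is a simple recursive walk.
A sketch under the assumption (not verified here) that model entries appear
as string values with recognizable file extensions; the real schema was not
published:

```python
import json

def model_references(manifest_path):
    """Collect string values in a manifest JSON that look like model files."""
    with open(manifest_path) as f:
        doc = json.load(f)
    refs = []

    def walk(node):
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
        elif isinstance(node, str) and node.endswith((".enc", ".pth", ".onnx")):
            refs.append(node)

    walk(doc)
    return sorted(refs)
```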

## What was not done

This list defines the boundaries of the analysis and establishes
that no proprietary technology was compromised.

- **No decompilation or disassembly.** The `beeble-ai` binary was
  never decompiled, disassembled, or analyzed at the instruction
  level. No tools like Ghidra, IDA Pro, or objdump were used to
  examine executable code.

- **No encryption was broken.** Beeble encrypts its model files with
  AES. Those encrypted files were not decrypted, and no attempt was
  made to recover encryption keys.

- **No Pyarmor circumvention.** The Pyarmor runtime that encrypts
  Beeble's custom Python code was not bypassed, attacked, or
  circumvented. The analysis relied on evidence visible outside the
  encrypted modules.

- **No code reverse-engineering.** The analysis did not examine how
  Beeble's proprietary code works, how models are orchestrated, or
  how SwitchLight processes its inputs. The only things identified
  were which third-party components are present and what
  architectural patterns they suggest.

- **No network interception.** No man-in-the-middle proxies or
  traffic analysis tools were used to intercept communications
  between the application and Beeble's servers.

- **No license circumvention.** The application was used under a
  valid license. No copy protection or DRM was circumvented.

## Limitations

This analysis can identify what components are present and draw
reasonable inferences about how they are used, but it cannot see
inside the encrypted code or the encrypted model files. Several
important limitations follow:

**Architecture inference is indirect.** The conclusion that PBR
models use segmentation_models_pytorch architecture is based on
the co-presence of that framework, compatible backbones, and the
absence of alternative architectural patterns. It is not based on
direct observation of the model graph. Pyarmor encryption prevents
reading the code that connects these components.

**TensorRT engines are opaque.** The compiled model engines inside
the `.enc` files do not expose their internal layer structure to
string extraction. The TRT plugins and quantization patterns found
in the binary come from the TensorRT runtime environment, not from
inside the encrypted model files.

**Single version analyzed.** The analysis was performed on one
version of the Linux desktop application (engine version r1.3.0,
model version m1.1.1). Other versions and platforms may differ.

**String extraction is inherently noisy.** Some identified strings
may come from transient data, cached web content, or libraries
loaded but not actively used in inference. The findings focus on
strings that are unambiguous--complete docstrings, model checkpoint
URLs, TensorRT plugin registrations, and package-specific identifiers
that cannot plausibly appear by accident.

848
docs/REPORT.md
Normal file
@ -0,0 +1,848 @@

# Beeble Studio: Technical Analysis

**Date**: January 2026
**Subject**: Beeble Studio desktop application (Linux x86_64 RPM)
**Scope**: Identification of third-party components and architectural
analysis of the application's AI pipeline

## 1. Introduction

Beeble Studio is a desktop application for VFX professionals that
generates physically-based rendering (PBR) passes from video footage.
It produces alpha mattes (background removal), depth maps, normal
maps, base color, roughness, specular, and metallic passes, along
with AI-driven relighting capabilities.

Beeble markets its pipeline as being "Powered by SwitchLight 3.0,"
their proprietary video-to-PBR model published at CVPR 2024. The
application is sold as a subscription product, with plans starting
at $42/month.

This analysis was prompted by observing that several of Beeble
Studio's output passes closely resemble the outputs of well-known
open-source models. Standard forensic techniques--string extraction
from process memory, TensorRT plugin analysis, PyInstaller module
listing, Electron app inspection, and manifest analysis--were used
to determine which components the application actually contains and
how they are organized.

## 2. Findings summary

The analysis identified four open-source models used directly for
user-facing outputs, a complete open-source detection and tracking
pipeline used for preprocessing, additional open-source architectural
components, and a proprietary model whose architecture raises questions
about how "proprietary" should be understood.

| Pipeline stage | Component | License | Open source |
|---------------|-----------|---------|-------------|
| Background removal (alpha) | transparent-background / InSPyReNet | MIT | Yes |
| Depth estimation | Depth Anything V2 via Kornia | Apache 2.0 | Yes |
| Person detection | RT-DETR via Kornia | Apache 2.0 | Yes |
| Face detection | Kornia face detection | Apache 2.0 | Yes |
| Multi-object tracking | BoxMOT via Kornia | MIT | Yes |
| Edge detection | DexiNed via Kornia | Apache 2.0 | Yes |
| Feature extraction | DINOv2 via timm | Apache 2.0 | Yes |
| Segmentation | segmentation_models_pytorch | MIT | Yes |
| Backbone architecture | PP-HGNet via timm | Apache 2.0 | Yes |
| Super resolution | RRDB-Net via Kornia | Apache 2.0 | Yes |
| PBR decomposition / relighting | SwitchLight 3.0 | Proprietary | See section 4 |

The preprocessing pipeline--background removal, depth estimation,
feature extraction, segmentation--is composed entirely of open-source
models used off the shelf.

The PBR decomposition and relighting stage is marketed as
"SwitchLight 3.0." The CVPR 2024 paper describes it as a
physics-based inverse rendering system with dedicated sub-networks
(Normal Net, Specular Net) and a Cook-Torrance reflectance model.
However, the application binary contains no references to any of
this physics-based terminology, and the architectural evidence
suggests the models are built from standard encoder-decoder
segmentation frameworks with pretrained backbones from timm.
This is discussed in detail in section 4.

The reconstructed pipeline architecture:

```
Input Video Frame
  |
  +--[RT-DETR + PP-HGNet]-----------> Person Detection
  |        |
  |        +--[BoxMOT]--------------> Tracking (multi-frame)
  |
  +--[Face Detection]---------------> Face Regions
  |
  +--[InSPyReNet]-------------------> Alpha Matte
  |
  +--[Depth Anything V2]------------> Depth Map
  |
  +--[DINOv2]-----------------------> Feature Maps
  |        |
  +--[segmentation_models_pytorch]--> Segmentation
  |
  +--[SMP encoder-decoder + PP-HGNet/ResNet backbone]
  |        |
  |        +----> Normal Map
  |        +----> Base Color
  |        +----> Roughness
  |        +----> Specular
  |        +----> Metallic
  |
  +--[DexiNed]----------------------> Edge Maps
  |
  +--[RRDB-Net]---------------------> Super Resolution
  |
  +--[Relighting model]-------------> Relit Output
```

Each stage runs independently. The Electron app passes separate
CLI flags (`--run-alpha`, `--run-depth`, `--run-pbr`) to the engine
binary, and each flag can be used in isolation. This is not a unified
end-to-end model--it is a pipeline of independent models. The
detection and tracking stages (RT-DETR, BoxMOT, face detection) serve
as preprocessing--locating and tracking subjects across frames before
the extraction models run.
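
A minimal sketch of driving such a stage-per-flag engine from Python. The
three flag names are the ones observed in the Electron code; the engine path
and any additional arguments are placeholders, not the engine's real
interface:

```python
import subprocess

# Stage switches observed in the Electron code; any other arguments the
# engine accepts are not reproduced here.
STAGE_FLAGS = ("--run-alpha", "--run-depth", "--run-pbr")

def run_stage(engine_path, flag, *extra_args):
    """Invoke one independent pipeline stage, mirroring the per-flag design."""
    if flag not in STAGE_FLAGS:
        raise ValueError(f"unknown stage flag: {flag}")
    subprocess.run([engine_path, flag, *extra_args], check=True)
```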

## 3. Evidence for each component

### 3.1 Background removal: transparent-background / InSPyReNet

The complete API docstring for the `transparent-background` Python
package was found verbatim in process memory:

```
Args:
    img (PIL.Image or np.ndarray): input image
    type (str): output type option as below.
        'rgba' will generate RGBA output regarding saliency score
        as an alpha map.
        'green' will change the background with green screen.
        'white' will change the background with white color.
        '[255, 0, 0]' will change the background with color
        code [255, 0, 0].
        'blur' will blur the background.
        'overlay' will cover the salient object with translucent
        green color, and highlight the edges.
Returns:
    PIL.Image: output image
```

This is a character-for-character match with the docstring published
at https://github.com/plemeri/transparent-background.

Additionally, TensorRT layer names found in the binary correspond to
Res2Net bottleneck blocks (`RnRes2Br1Br2c_TRT`, `RnRes2Br2bBr2c_TRT`,
`RnRes2FullFusion_TRT`); Res2Net is the backbone architecture used by
InSPyReNet. The `transparent_background.backbones.SwinTransformer`
module path was also found in the PyInstaller bundle's module list.

- **Library**: transparent-background (`pip install transparent-background`)
- **Model**: InSPyReNet (Kim et al., ACCV 2022)
- **License**: MIT
- **Paper**: https://arxiv.org/abs/2209.09475
- **Repository**: https://github.com/plemeri/transparent-background

### 3.2 Depth estimation: Depth Anything V2

The complete API documentation for Depth Anything V2's ONNX export
interface was found in process memory:

```
Export a DepthAnything model to an ONNX model file.

Args:
    model_name: The name of the model to be loaded.
        Valid model names include:
        - `depth-anything-v2-small`
        - `depth-anything-v2-base`
        - `depth-anything-v2-large`
    model_type:
        The type of the model to be loaded.
        Valid model types include:
        - `model`
        - `model_bnb4`
        - `model_fp16`
        - `model_int8`
```

This is accessed through Kornia's ONNX builder interface
(`kornia.onnx.DepthAnythingONNXBuilder`), with 50+ additional
references to Kornia's tutorials and modules throughout the binary.

- **Library**: Kornia (`pip install kornia`)
- **Model**: Depth Anything V2 (Yang et al., 2024)
- **License**: Apache 2.0
- **Paper**: https://arxiv.org/abs/2406.09414
- **Repository**: https://github.com/kornia/kornia

### 3.3 Feature extraction: DINOv2

Multiple references to DINOv2 were found across the application:

- Runtime warning: `WARNING:dinov2:xFormers not available` (captured
  from application output during normal operation)
- Model checkpoint URLs: `dinov2_vits14_pretrain.pth`,
  `dinov2_vitb14_pretrain.pth` (Meta's public model hosting)
- timm model registry name: `vit_large_patch14_dinov2.lvd142m`
- File path: `/mnt/work/Beeble_Models/lib/timm/models/hrnet.py`
  (timm library bundled in the application)

DINOv2 is Meta's self-supervised vision transformer. It does not
produce a user-facing output directly--it generates feature maps
that feed into downstream models. This is a standard pattern in
modern computer vision: use a large pretrained backbone for feature
extraction, then train smaller task-specific heads on top.

- **Library**: timm (`pip install timm`)
- **Model**: DINOv2 (Oquab et al., Meta AI, 2023)
- **License**: Apache 2.0
- **Paper**: https://arxiv.org/abs/2304.07193
- **Repository**: https://github.com/huggingface/pytorch-image-models

### 3.4 Segmentation: segmentation_models_pytorch

A direct reference to the library's GitHub repository, encoder/decoder
architecture parameters, and decoder documentation was found in
process memory:

```
encoder_name: Name of the encoder to use.
encoder_depth: Depth of the encoder.
decoder_channels: Number of channels in the decoder.
decoder_name: What decoder to use.

https://github.com/qubvel-org/segmentation_models.pytorch/
tree/main/segmentation_models_pytorch/decoders

Note:
    Only encoder weights are available.
    Pretrained weights for the whole model are not available.
```

This library is a framework for building encoder-decoder
segmentation models. It is not a model itself--it provides the
architecture (UNet, FPN, DeepLabV3, etc.) into which you plug a
pretrained encoder backbone (ResNet, EfficientNet, etc.) and train
the decoder on your own data for your specific task.

Its presence alongside the pretrained backbones described below
suggests it serves as the architectural foundation for one or more
of the PBR output models. This is discussed further in section 4.

- **Library**: segmentation_models_pytorch
  (`pip install segmentation-models-pytorch`)
- **License**: MIT
- **Repository**: https://github.com/qubvel-org/segmentation_models.pytorch

### 3.5 Backbone: PP-HGNet
|
||||||
|
|
||||||
|
The `HighPerfGpuNet` class was found in process memory along with its
|
||||||
|
full structure:
|
||||||
|
|
||||||
|
```
|
||||||
|
HighPerfGpuNet
|
||||||
|
HighPerfGpuNet.forward_features
|
||||||
|
HighPerfGpuNet.reset_classifier
|
||||||
|
HighPerfGpuBlock.__init__
|
||||||
|
LearnableAffineBlock
|
||||||
|
ConvBNAct.__init__
|
||||||
|
StemV1.__init__
|
||||||
|
StemV1.forward
|
||||||
|
_create_hgnetr
|
||||||
|
```
|
||||||
|
|
||||||
|
This is PP-HGNet (PaddlePaddle High Performance GPU Network), ported
|
||||||
|
to timm's model registry. Documentation strings confirm the identity:
|
||||||
|
|
||||||
|
```
|
||||||
|
PP-HGNet (V1 & V2)
|
||||||
|
PP-HGNetv2: https://github.com/PaddlePaddle/PaddleClas/
|
||||||
|
.../pp_hgnet_v2.py
|
||||||
|
```
|
||||||
|
|
||||||
|
PP-HGNet is a convolutional backbone architecture designed for fast
|
||||||
|
GPU inference, originally developed for Baidu's RT-DETR real-time
|
||||||
|
object detection system. It is available as a pretrained backbone
|
||||||
|
through timm and is commonly used as an encoder in larger models.
|
||||||
|
|
||||||
|
PP-HGNet serves a dual role in the Beeble pipeline. First, it
|
||||||
|
functions as the backbone encoder for the RT-DETR person detection
|
||||||
|
model (see section 3.6). Second, based on the co-presence of
|
||||||
|
`segmentation_models_pytorch` and compatible encoder interfaces, it
|
||||||
|
likely serves as one of the backbone encoders for the PBR
|
||||||
|
decomposition models. This dual use is standard--the same pretrained
|
||||||
|
backbone can be loaded into different model architectures for different
|
||||||
|
tasks.
|
||||||
|
|
||||||
|
- **Library**: timm (`pip install timm`)
|
||||||
|
- **Model**: PP-HGNet (Baidu/PaddlePaddle)
|
||||||
|
- **License**: Apache 2.0
|
||||||
|
- **Repository**: https://github.com/huggingface/pytorch-image-models


### 3.6 Detection and tracking pipeline

The binary contains a complete person detection and tracking pipeline
built from open-source models accessed through Kornia.

**RT-DETR (Real-Time Detection Transformer).** Full module paths for
RT-DETR were found in the binary:

```
kornia.contrib.models.rt_detr.architecture.hgnetv2
kornia.contrib.models.rt_detr.architecture.resnet_d
kornia.contrib.models.rt_detr.architecture.rtdetr_head
kornia.contrib.models.rt_detr.architecture.hybrid_encoder
kornia.models.detection.rtdetr
```

RT-DETR model configuration strings confirm the PP-HGNet connection:

```
Configuration to construct RT-DETR model.
- HGNetV2-L: 'hgnetv2_l' or RTDETRModelType.hgnetv2_l
- HGNetV2-X: 'hgnetv2_x' or RTDETRModelType.hgnetv2_x
```

RT-DETR is Baidu's real-time object detection model, published at
ICLR 2024. It detects and localizes objects (including persons) in
images. In Beeble's pipeline, it likely serves as the initial stage
that identifies which regions of the frame contain subjects to
process.

- **Model**: RT-DETR (Zhao et al., 2024)
- **License**: Apache 2.0 (via Kornia)
- **Paper**: https://arxiv.org/abs/2304.08069

**Face detection.** The `kornia.contrib.face_detection` module and
`kornia.contrib.FaceDetectorResult` class were found in the binary.
This provides face region detection, likely used to guide the PBR
models in handling facial features (skin, eyes, hair) differently
from other body parts or clothing.

**BoxMOT (multi-object tracking).** The module path
`kornia.models.tracking.boxmot_tracker` was found in the binary.
BoxMOT is a multi-object tracking library that maintains identity
across video frames--given detections from RT-DETR on each frame,
BoxMOT tracks which detection corresponds to which person over time.

- **Repository**: https://github.com/mikel-brostrom/boxmot
- **License**: mixed (AGPL-3.0 for some trackers, MIT for others)

The presence of a full detection-tracking pipeline is notable because
it means the video processing is not a single model operating on raw
frames. The pipeline first detects and tracks persons, then runs
the extraction models on the detected regions. This is a standard
computer vision approach, and every component in this preprocessing
chain is open-source.
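
The detect-then-track pattern can be illustrated with a minimal greedy
IoU matcher. This is a toy sketch of the core idea only -- BoxMOT's
actual trackers add Kalman motion models and appearance features, and
none of the code below is recovered from Beeble:

```python
# Toy sketch of detection-to-track association by bounding-box overlap.
# Real trackers (BoxMOT's DeepOCSORT, BoT-SORT, ...) add motion models
# and re-identification; this only shows identity persistence.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, threshold=0.3):
    """Greedily match each detection to the existing track it overlaps most."""
    next_id = max(tracks, default=0) + 1
    updated = {}
    for det in detections:
        best_id, best_iou = None, threshold
        for tid, box in tracks.items():
            if tid not in updated and iou(box, det) > best_iou:
                best_id, best_iou = tid, iou(box, det)
        if best_id is None:          # unmatched detection: new identity
            best_id, next_id = next_id, next_id + 1
        updated[best_id] = det
    return updated

# Frame 1: two people detected; Frame 2: both moved slightly.
tracks = update_tracks({}, [(0, 0, 10, 10), (50, 0, 60, 10)])
tracks = update_tracks(tracks, [(1, 0, 11, 10), (51, 0, 61, 10)])
print(tracks)   # identities 1 and 2 persist across both frames
```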


### 3.7 Edge detection and super resolution

Two additional open-source models were found:

**DexiNed (edge detection).** The module path
`kornia.models.edge_detection.dexined` was found in the binary.
DexiNed (Dense Extreme Inception Network for Edge Detection) is
a CNN-based edge detector. It likely produces edge maps used as
auxiliary input or guidance for other models in the pipeline.

- **Model**: DexiNed (Soria et al., 2020)
- **License**: Apache 2.0 (via Kornia)

**RRDB-Net (super resolution).** The module path
`kornia.models.super_resolution.rrdbnet` was found in the binary.
RRDB-Net (Residual-in-Residual Dense Block Network) is the backbone
of ESRGAN, the widely used super resolution model. This is likely
used to upscale PBR passes to the output resolution.

- **Model**: RRDB-Net / ESRGAN (Wang et al., 2018)
- **License**: Apache 2.0 (via Kornia)


### 3.8 TensorRT plugins and quantized backbones

Several custom TensorRT plugins were found compiled for inference:

- `DisentangledAttention_TRT` -- a custom TRT plugin implementing
  DeBERTa-style disentangled attention (He et al., Microsoft, 2021).
  The `_TRT` suffix indicates this is compiled for production
  inference, not just a bundled library. This suggests a transformer
  component in the pipeline that uses disentangled attention to
  process content and position information separately.
- `GridAnchorRect_TRT` -- anchor generation for object detection.
  Combined with the RT-DETR and face detection references, this
  confirms that the pipeline includes a detection stage.

Multiple backbone architectures were found with TensorRT INT8
quantization and stage-level fusion optimizations:

```
int8_resnet50_stage_1_4_fusion
int8_resnet50_stage_2_fusion
int8_resnet50_stage_3_fusion
int8_resnet34_stage_1_4_fusion
int8_resnet34_stage_2_fusion
int8_resnet34_stage_3_fusion
int8_resnext101_backbone_fusion
```

This shows that ResNet-34, ResNet-50, and ResNeXt-101 are compiled
for inference at INT8 precision with stage-level fusion
optimizations. These are standard pretrained backbones available
from torchvision and timm.
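
For readers unfamiliar with the term, INT8 quantization maps
floating-point weights and activations into 8-bit integers via a scale
factor. The sketch below shows the basic symmetric scheme; TensorRT's
actual calibration (per-channel scales, entropy calibration, fused
kernels) is far more involved, and this code is purely illustrative:

```python
# Minimal sketch of symmetric INT8 quantization: choose a scale so the
# largest |value| maps to 127, round to integers, then dequantize.
# TensorRT adds calibration, per-channel scales, and layer fusion.

def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
error = max(abs(a - b) for a, b in zip(weights, approx))
print(q, error)  # 4x smaller storage, small reconstruction error
```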


### 3.9 Additional libraries

The binary contains references to supporting libraries that are
standard in ML applications:

| Library | License | Role |
|---------|---------|------|
| PyTorch 2.8.0+cu128 | BSD 3-Clause | Core ML framework |
| TensorRT 10 | NVIDIA proprietary | Model compilation and inference |
| OpenCV 4.11.0.86 (with Qt5, FFmpeg) | Apache 2.0 | Image processing |
| timm 1.0.15 | Apache 2.0 | Model registry and backbones |
| Albumentations | MIT | Image augmentation |
| Pillow | MIT-CMU | Image I/O |
| HuggingFace Hub | Apache 2.0 | Model downloading |
| gdown | MIT | Google Drive file downloading |
| NumPy, SciPy | BSD | Numerical computation |
| Hydra / OmegaConf | MIT | ML configuration management |
| einops | MIT | Tensor manipulation |
| safetensors | Apache 2.0 | Model weight format |
| Flet | Apache 2.0 | Cross-platform GUI framework |
| SoftHSM2 / PKCS#11 | BSD 2-Clause | License token validation |
| OpenSSL 1.1 | OpenSSL/SSLeay (dual) | Cryptographic operations |

Two entries deserve mention. **Pyarmor** (runtime ID
`pyarmor_runtime_007423`) is used to encrypt all of Beeble's custom
Python code--every proprietary module is obfuscated with randomized
names and encrypted bytecode. This prevents static analysis of how
models are orchestrated. **Flet** is the GUI framework powering the
Python-side interface.


## 4. Architecture analysis

This section presents evidence about how the PBR decomposition model
is constructed. The findings here are more inferential than those in
section 3--they are based on the absence of expected evidence and the
presence of architectural patterns, rather than on verbatim string
matches. The distinction matters, and we draw it clearly.

### 4.1 What the CVPR 2024 paper describes

The SwitchLight CVPR 2024 paper describes a physics-based inverse
rendering architecture with several dedicated components:

- A **Normal Net** that estimates surface normals
- A **Specular Net** that predicts specular reflectance properties
- Analytical **albedo derivation** using a Cook-Torrance BRDF model
- A **Render Net** that performs the final relighting
- Spherical harmonics for environment lighting representation

This is presented as a unified system where intrinsic decomposition
(breaking an image into its physical components) is an intermediate
step in the relighting pipeline. The paper's novelty claim rests
partly on this physics-driven architecture.

### 4.2 What the binary contains

A thorough string search of the 2 GB process memory dump and the
56 MB application binary found **zero** matches for the following
terms:

- `cook_torrance`, `cook-torrance`, `Cook_Torrance`, `CookTorrance`
- `brdf`, `BRDF`
- `albedo`
- `specular_net`, `normal_net`, `render_net`
- `lightstage`, `light_stage`, `OLAT`
- `environment_map`, `env_map`, `spherical_harmonic`, `SH_coeff`
- `inverse_rendering`, `intrinsic_decomposition`
- `relight` (as a function or class name)
- `switchlight`, `SwitchLight` (in any capitalization)

Not one of these terms appears anywhere in the application.

The absence of "SwitchLight" deserves emphasis. This term was
searched across three independent codebases:

1. The `beeble-ai` engine binary (56 MB) -- zero matches
2. The `beeble-engine-setup` binary (13 MB) -- zero matches
3. All 667 JavaScript files in the Electron app's `dist/` directory
   -- zero matches

"SwitchLight" is purely a marketing name. It does not appear as a
model name, a class name, a configuration key, a log message, or a
comment anywhere in the application. By contrast, open-source
component names appear throughout the binary because they are real
software identifiers used by real code. "SwitchLight" is not used by
any code at all.

This is a significant absence. When an application uses a library or
implements an algorithm, its terminology appears in memory through
function names, variable names, error messages, logging, docstrings,
or class definitions. The open-source components (InSPyReNet, Depth
Anything, DINOv2, RT-DETR, BoxMOT) are all identifiable precisely
because their terminology is present. The physics-based rendering
vocabulary described in the CVPR paper is entirely absent.

There is a caveat: Beeble encrypts its custom Python code with
Pyarmor, which encrypts bytecode and obfuscates module names. If the
Cook-Torrance logic exists only in Pyarmor-encrypted modules, its
terminology would not be visible to string extraction. However,
TensorRT layer names, model checkpoint references, and library-level
strings survive Pyarmor encryption--and none of those contain
physics-based rendering terminology either.
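
The search described above reduces to scanning a file's raw bytes for
ASCII terms, the same operation `strings | grep` performs. A minimal
Python equivalent (the file path in the comment is hypothetical; the
demo data here is constructed, not taken from the dump):

```python
# Minimal sketch of searching a binary blob for ASCII terms -- the same
# evidence-gathering step done with `strings | grep` in this report.

def count_matches(data: bytes, terms):
    """Case-insensitive substring counts for each term in a byte blob."""
    lowered = data.lower()
    return {t: lowered.count(t.lower().encode()) for t in terms}

# In practice: data = open("/path/to/beeble-ai", "rb").read()
data = b"kornia.models.detection.rtdetr\x00timm/hgnetv2\x00pyarmor_runtime"
print(count_matches(data, ["kornia", "SwitchLight", "hgnetv2"]))
# → {'kornia': 1, 'SwitchLight': 0, 'hgnetv2': 1}
```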

### 4.3 What the binary contains instead

Where you would expect physics-based rendering components, the
binary shows standard machine learning infrastructure:

- **segmentation_models_pytorch** -- an encoder-decoder segmentation
  framework designed for dense pixel prediction tasks. It provides
  architectures (UNet, FPN, DeepLabV3) that take pretrained encoder
  backbones and learn to predict pixel-level outputs.
- **PP-HGNet, ResNet-34, ResNet-50, ResNeXt-101** -- standard
  pretrained backbone architectures, all available from timm. These
  are the encoders that plug into segmentation_models_pytorch.
- **DINOv2** -- a self-supervised feature extractor that provides
  rich visual features as input to downstream models.
- **DisentangledAttention** -- a transformer attention mechanism,
  compiled as a custom TRT plugin for inference.

This is the standard toolkit for building dense prediction models
in computer vision. You pick an encoder backbone, connect it to a
segmentation decoder, and train the resulting model to predict
whatever pixel-level output you need--whether that is semantic
labels, depth values, or normal vectors.

### 4.4 What the Electron app reveals

The application's Electron shell (the UI layer that orchestrates the
Python engine) is not encrypted and provides clear evidence about
the pipeline structure.

The engine binary receives independent processing flags:

- `--run-alpha` -- generates alpha mattes
- `--run-depth` -- generates depth maps
- `--run-pbr` -- generates BaseColor, Normal, Roughness, Specular,
  Metallic

Each flag can be used in isolation. A user can request alpha without
depth, or depth without PBR. The Electron app constructs these flags
independently based on user selections.

A session-start log entry captured in process memory confirms this
separation:

```json
{
  "extra_command": "--run-pbr --run-alpha --run-depth
    --save-exr --pbr-stride 1,2 --fps 24.0
    --engine-version r1.3.0-m1.1.1"
}
```

The `--pbr-stride 1,2` flag is notable. It indicates that PBR passes
are not processed on every frame--they use a stride, processing a
subset of frames and presumably interpolating the rest. This
contradicts the "true end-to-end video model that understands motion
natively" claim on Beeble's research page. A model that truly
processes video end-to-end would not need to skip frames.
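
One plausible reading of the stride (an assumption on our part -- the
exact semantics are hidden in the encrypted pipeline code) is that PBR
inference runs on every Nth frame and intermediate frames reuse the
nearest processed result:

```python
# Sketch of stride-based processing: run the expensive model on a
# subset of frames and fill the rest from the nearest processed frame.
# Interpreting "1,2" as per-pass stride values is an assumption.

def strided_schedule(num_frames, stride):
    processed = list(range(0, num_frames, stride))
    # each frame maps to the nearest processed frame at or before it
    source = {f: (f // stride) * stride for f in range(num_frames)}
    return processed, source

processed, source = strided_schedule(num_frames=8, stride=2)
print(processed)   # frames actually run through the PBR model: [0, 2, 4, 6]
print(source)      # where every output frame's data comes from
```

With `stride=1` every frame is processed; with `stride=2` half the
frames borrow their PBR data, which is consistent with per-pass stride
values like `1,2` but incompatible with a model that consumes the whole
video natively.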

### 4.5 What this suggests

The evidence points to a specific conclusion: the PBR decomposition
model is most likely a standard encoder-decoder segmentation model
(segmentation_models_pytorch architecture) with pretrained backbones
(PP-HGNet, ResNet, DINOv2), trained on Beeble's private dataset to
predict PBR channels as its output.

This is a common and well-understood approach in computer vision.
You take a pretrained backbone, attach a decoder, and train the
whole model on your task-specific data using task-specific losses.
The Cook-Torrance reflectance model described in the CVPR paper
would then be a *training-time loss function*--used to compute the
error between predicted and ground-truth renders during
training--rather than an architectural component that exists at
inference time.
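
For readers unfamiliar with the term, Cook-Torrance computes specular
reflection from roughness and viewing geometry. A single-pixel sketch of
the specular term, using the standard GGX/Schlick/Smith approximations,
shows what a training-time rendering loss would evaluate from predicted
PBR maps. This is textbook shading math written for illustration, not
code recovered from Beeble:

```python
import math

# Cook-Torrance specular term for one pixel: D * F * G / (4 (n.l)(n.v)),
# with GGX distribution, Schlick Fresnel, and Smith geometry terms.
# Illustrative only -- this is what a training-time rendering loss would
# compute from predicted roughness/specular maps, not Beeble's code.

def ggx_distribution(n_dot_h, roughness):
    a2 = roughness ** 4
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def schlick_fresnel(v_dot_h, f0):
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def smith_g1(n_dot_x, roughness):
    k = (roughness + 1.0) ** 2 / 8.0
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h,
                           roughness, f0=0.04):
    d = ggx_distribution(n_dot_h, roughness)
    f = schlick_fresnel(v_dot_h, f0)
    g = smith_g1(n_dot_l, roughness) * smith_g1(n_dot_v, roughness)
    return (d * f * g) / (4.0 * n_dot_l * n_dot_v)

# Near the specular peak (half-vector aligned with the normal), a
# smooth surface reflects far more sharply than a rough one:
print(cook_torrance_specular(0.8, 0.9, 0.999, 0.9, roughness=0.2))
print(cook_torrance_specular(0.8, 0.9, 0.999, 0.9, roughness=0.8))
```

Differentiating a render built from terms like these against a
ground-truth lightstage render is how a "physics-based loss" supervises
an otherwise ordinary feedforward network.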

This distinction matters because it changes what "Powered by
SwitchLight 3.0" actually means. The CVPR paper's framing suggests a
novel physics-driven architecture. The binary evidence suggests
standard open-source architectures trained with proprietary data.
The genuine proprietary elements are the training methodology, the
lightstage training data, and the trained weights--not the model
architecture itself.

We want to be clear about the limits of this inference. The Pyarmor
encryption prevents us from seeing the actual pipeline code, and the
TensorRT engines inside the encrypted `.enc` model files do not
expose their internal layer structure through string extraction. It
is possible, though we think unlikely, that the physics-based
rendering code exists entirely within the encrypted layers and uses
no standard terminology. We present this analysis as our best
reading of the available evidence, not as a certainty.


## 5. Code protection

Beeble uses two layers of protection to obscure its pipeline:

**Model encryption.** The six model files are stored as `.enc`
files encrypted with AES. They total 4.4 GB:

| File | Size |
|------|------|
| 97b0085560.enc | 1,877 MB |
| b001322340.enc | 1,877 MB |
| 6edccd5753.enc | 351 MB |
| e710b0c669.enc | 135 MB |
| 0d407dcf32.enc | 111 MB |
| 7f121ea5bc.enc | 49 MB |

The filenames are derived from their SHA-256 hashes. No metadata
in the manifest indicates what each model does. However, comparing
file sizes against known open-source model checkpoints is
suggestive:

- The 351 MB file closely matches the size of a DINOv2 ViT-B
  checkpoint (~346 MB for `dinov2_vitb14_pretrain.pth`)
- The two ~1,877 MB files are nearly identical in size (within 1 MB
  of each other), suggesting two variants of the same model compiled
  to TensorRT engines--possibly different precision levels or input
  resolution configurations
- The smaller files (49 MB, 111 MB, 135 MB) are consistent with
  single-task encoder-decoder models compiled to TensorRT with INT8
  quantization
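
The size comparison can be made mechanical: match each encrypted file
against a table of known checkpoint sizes within a tolerance. A sketch
of that check (the candidate table is deliberately partial, and the 5%
tolerance is an arbitrary choice; a size match is suggestive, never
proof of identity):

```python
# Sketch of matching opaque .enc file sizes against known checkpoint
# sizes. The candidate list is illustrative and incomplete.

KNOWN_MB = {
    "dinov2_vitb14_pretrain.pth": 346,
    # TensorRT engines vary with precision/resolution, so most entries
    # would need size ranges rather than single values.
}

def closest_match(size_mb, known=KNOWN_MB, tolerance=0.05):
    name, ref = min(known.items(), key=lambda kv: abs(kv[1] - size_mb))
    if abs(ref - size_mb) / ref <= tolerance:
        return name
    return None

enc_sizes = {"6edccd5753.enc": 351, "7f121ea5bc.enc": 49}
for fname, mb in enc_sizes.items():
    print(fname, "->", closest_match(mb))
```

The 351 MB file lands within 1.5% of the DINOv2 ViT-B checkpoint; the
49 MB file matches nothing in this partial table.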

**Code obfuscation.** All custom Python code is encrypted with
Pyarmor. Module names are randomized (`q47ne3pa`, `qf1hf17m`,
`vk3zuv58`) and bytecode is decrypted only at runtime. The
application contains approximately 82 obfuscated modules across
three main packages, with the largest single module being 108 KB.

This level of protection is unusual for a desktop application in
the VFX space, and it is worth understanding what it does and does
not hide. Pyarmor prevents reading the pipeline orchestration
code--how models are loaded, connected, and run. But it does not
hide which libraries are loaded into memory, which TensorRT plugins
are compiled, or what command-line interface the engine exposes.
Those are the evidence sources this analysis relies on.


## 6. Beeble's public claims

Beeble's marketing consistently attributes the entire Video-to-VFX
pipeline to SwitchLight. The following are exact quotes from their
public pages (see
[evidence/marketing_claims.md](../evidence/marketing_claims.md)
for the complete archive).

**Beeble Studio product page** (beeble.ai/beeble-studio):

> Powered by **SwitchLight 3.0**, convert images and videos into
> **full PBR passes with alpha and depth maps** for seamless
> relighting, background removal, and advanced compositing.

**SwitchLight 3.0 research page** (beeble.ai/research/switchlight-3-0-is-here):

> SwitchLight 3.0 is the best Video-to-PBR model in the world.

> SwitchLight 3.0 is a **true end-to-end video model** that
> understands motion natively.

**Documentation FAQ** (docs.beeble.ai/help/faq):

On the "What is Video-to-VFX?" question:

> **Video-to-VFX** uses our foundation model, **SwitchLight 3.0**,
> and SOTA AI models to convert your footage into VFX-ready assets.

On the "Is Beeble's AI trained responsibly?" question:

> When open-source models are included, we choose them
> carefully--only those with published research papers that disclose
> their training data and carry valid commercial-use licenses.

The FAQ is the only public place where Beeble acknowledges the use
of open-source models. The product page and research page present
the entire pipeline as "Powered by SwitchLight 3.0" without
distinguishing which output passes come from SwitchLight versus
third-party open-source models.

### Investor-facing claims

Beeble raised a $4.75M seed round in July 2024 at a reported $25M
valuation, led by Basis Set Ventures and Fika Ventures. At the time,
the company had approximately 7 employees. Press coverage of the
funding consistently uses language like "foundational model" and
"world-class foundational model in lighting" to describe
SwitchLight--language that implies a novel, proprietary system
rather than a pipeline of open-source components with proprietary
weights.

These investor-facing claims were made through public press releases
and coverage, not private communications. They are relevant because
they represent how Beeble chose to characterize its technology to
the market. See
[evidence/marketing_claims.md](../evidence/marketing_claims.md)
for archived quotes.

The "true end-to-end video model" claim is particularly difficult
to reconcile with the evidence. The application processes alpha,
depth, and PBR as independent stages using separate CLI flags.
PBR processing uses a frame stride (`--pbr-stride 1,2`), skipping
frames rather than processing video natively. This is a pipeline
of separate models, not an end-to-end video model.


## 7. What Beeble does well

This analysis would be incomplete without acknowledging what is
genuinely Beeble's own work.

**SwitchLight is published research.** The CVPR 2024 paper describes
a real methodology for training intrinsic decomposition models using
lightstage data and physics-based losses. Whether the deployed
architecture matches the paper's description is a separate question
from whether the research itself has merit. It does.

**The trained weights are real work.** If the PBR model is built on
standard architectures (as the evidence suggests), the value lies in
the training data and training process. Acquiring lightstage data,
designing loss functions, and iterating on model quality is
substantial work. Pretrained model weights trained on high-quality
domain-specific data are genuinely valuable, even when the
architecture is standard.

**TensorRT compilation is non-trivial engineering.** Converting
PyTorch models to TensorRT engines with INT8 quantization for
real-time inference requires expertise. The application runs at
interactive speeds on consumer GPUs with 11 GB+ VRAM.

**The product is a real product.** The desktop application, Nuke/
Blender/Unreal integrations, cloud API, render queue, EXR output
with ACEScg color space support, and overall UX represent
substantial product engineering.


## 8. The real question

Most Beeble Studio users use the application for its extraction
passes: alpha mattes, diffuse/albedo, normals, and depth maps. The
relighting features exist but are secondary to the extraction
workflow for much of the user base.

The alpha and depth extractions are produced by open-source models
used off the shelf. They can be replicated for free using the exact
same libraries.

The PBR extractions (normal, base color, roughness, specular,
metallic) use models whose trained weights are proprietary, but
whose architecture appears to be built from the same open-source
frameworks available to anyone. Open-source alternatives for PBR
decomposition now exist (CHORD from Ubisoft, RGB-X from Adobe) and
are narrowing the quality gap, though they were trained on different
data and may perform differently on portrait subjects.

See [COMFYUI_GUIDE.md](COMFYUI_GUIDE.md) for a detailed guide on
replicating each stage of the pipeline with open-source tools.

There is a common assumption that the training data represents a
significant barrier to replication--that lightstage captures are
expensive and rare, and therefore the trained weights are uniquely
valuable. This may overstate the difficulty. For PBR decomposition
training, what you need is a dataset of images paired with
ground-truth PBR maps (albedo, normal, roughness, metallic). Modern
3D character pipelines--Unreal Engine MetaHumans, Blender character
generators, procedural systems in Houdini--can render hundreds of
thousands of such pairs with varied poses, lighting, skin tones, and
clothing. The ground truth is inherent: you created the scene, so
you already have the PBR maps. With model sizes under 2 GB and
standard encoder-decoder architectures, the compute cost to train
equivalent models from synthetic data is modest.

None of this means Beeble has no value. Convenience, polish, and
integration are real things people pay for. But the gap between
what the marketing says ("Powered by SwitchLight 3.0") and what
the application contains (a pipeline of mostly open-source
components, some used directly and others used as architectural
building blocks) is wider than what users would reasonably expect.
And the technical moat may be thinner than investors were led to
believe.


## 9. License compliance

All identified open-source components require attribution in
redistributed software. Both the MIT License and Apache 2.0 License
require that copyright notices and license texts be included with
any distribution of the software.

No such attribution was found in Beeble Studio's application,
documentation, or user-facing materials.

The scope of the issue extends beyond the core models. The
application bundles approximately 48 Python packages in its `lib/`
directory. Of these, only 6 include LICENSE files (cryptography,
gdown, MarkupSafe, numpy, openexr, triton). The remaining 42
packages--including PyTorch, Kornia, Pillow, and others with
attribution requirements--have no license files in the distribution.
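
The 6-of-48 figure comes from a simple audit: walk the bundled `lib/`
directory and record which packages ship a license file. A sketch of
that check as a pure function (the package names in the demo are
examples, not the full bundle inventory):

```python
# Sketch of the license-file audit: given each bundled package's file
# list, report which packages carry no license text.

LICENSE_NAMES = {"LICENSE", "LICENSE.txt", "LICENSE.md", "COPYING"}

def missing_licenses(packages):
    """packages: dict of package name -> iterable of bundled file names."""
    return sorted(
        name for name, files in packages.items()
        if not LICENSE_NAMES.intersection(files)
    )

bundle = {
    "numpy": ["LICENSE.txt", "core.py"],
    "torch": ["__init__.py", "version.py"],
    "kornia": ["__init__.py"],
}
print(missing_licenses(bundle))  # → ['kornia', 'torch']
```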

For a detailed analysis of each license's requirements and what
compliance would look like, see
[LICENSE_ANALYSIS.md](LICENSE_ANALYSIS.md).


## 10. Conclusion

Beeble Studio's Video-to-VFX pipeline is a collection of independent
models, most built from open-source components.

The preprocessing stages are entirely open-source: background removal
(InSPyReNet), depth estimation (Depth Anything V2), person detection
(RT-DETR with PP-HGNet), face detection (Kornia), multi-object
tracking (BoxMOT), edge detection (DexiNed), and super resolution
(RRDB-Net). The PBR decomposition models appear to be built on
open-source architectural frameworks (segmentation_models_pytorch,
timm backbones) with proprietary trained weights.

The name "SwitchLight" does not appear anywhere in the application--
not in the engine binary, not in the setup binary, not in the
Electron app's 667 JavaScript files. It is a marketing name that
refers to no identifiable software component.

The CVPR 2024 paper describes a physics-based inverse rendering
architecture. The deployed application contains no evidence of
physics-based rendering code at inference time. The most likely
explanation is that the physics (Cook-Torrance rendering) was used
during training as a loss function, and the deployed model is a
standard feedforward network that learned to predict PBR channels
from that training process.

Beeble's marketing attributes the entire pipeline to SwitchLight
3.0. The evidence shows that alpha mattes come from InSPyReNet, depth
maps come from Depth Anything V2, person detection comes from
RT-DETR, tracking comes from BoxMOT, and the PBR models are built on
segmentation_models_pytorch with PP-HGNet and ResNet backbones. The
"true end-to-end video model" claim is contradicted by the
independent processing flags and frame stride parameter observed in
the application.

Of the approximately 48 Python packages bundled with the application,
only 6 include license files. The core open-source models' licenses
require attribution that does not appear to be provided.

These findings can be independently verified using the methods
described in [VERIFICATION_GUIDE.md](VERIFICATION_GUIDE.md).

253
docs/VERIFICATION_GUIDE.md
Normal file
# Verification Guide
|
||||||
|
|
||||||
|
This document explains how to independently verify the claims made in
|
||||||
|
this repository using standard system tools. Everything described here
|
||||||
|
is non-destructive, legal, and requires only a licensed installation
|
||||||
|
of Beeble Studio on Linux.
|
||||||
|
|
||||||
|
|
||||||
|
## Prerequisites

- A Linux system with Beeble Studio installed (RPM distribution)
- Basic familiarity with the terminal
- No special tools required beyond what ships with most Linux distributions
## Method 1: String search on the binary

The `strings` command extracts human-readable text from any binary file. It is part of GNU binutils, which ships with most Linux distributions.
```bash
# Extract all readable strings from the beeble-ai binary
strings /path/to/beeble-ai | grep -i "transparent.background"
strings /path/to/beeble-ai | grep -i "inspyrenet"
strings /path/to/beeble-ai | grep -i "depth.anything"
strings /path/to/beeble-ai | grep -i "dinov2"
strings /path/to/beeble-ai | grep -i "segmentation_models"
strings /path/to/beeble-ai | grep -i "kornia"
strings /path/to/beeble-ai | grep -i "HighPerfGpuNet"
strings /path/to/beeble-ai | grep -i "switchlight"
strings /path/to/beeble-ai | grep -i "rt_detr\|rtdetr"
strings /path/to/beeble-ai | grep -i "boxmot"
strings /path/to/beeble-ai | grep -i "face_detection"
strings /path/to/beeble-ai | grep -i "dexined"
strings /path/to/beeble-ai | grep -i "rrdbnet\|super_resolution"
```
If the open-source models identified in this analysis are present, you will see matching strings: library names, docstrings, model checkpoint references, and configuration data.

You can also search for Python package paths:

```bash
strings /path/to/beeble-ai | grep "\.py" | grep -i "timm\|kornia\|albumentations"
```
To verify the architecture findings, search for TensorRT plugin names and quantized backbone patterns:

```bash
strings /path/to/beeble-ai | grep "_TRT"
strings /path/to/beeble-ai | grep "int8_resnet"
strings /path/to/beeble-ai | grep -i "encoder_name\|decoder_channels"
```
To verify the absence of physics-based rendering terminology:

```bash
# These searches should return no results
strings /path/to/beeble-ai | grep -i "cook.torrance\|brdf\|albedo"
strings /path/to/beeble-ai | grep -i "specular_net\|normal_net\|render_net"
strings /path/to/beeble-ai | grep -i "switchlight"
```
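If `strings` happens to be unavailable, the same searches can be reproduced with a minimal pure-Python equivalent. The sketch below scans a byte buffer for runs of printable ASCII, which is essentially what GNU `strings` does; the binary path in the comment is an assumption, and the demo runs on an in-memory buffer:

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return printable-ASCII runs of at least min_len bytes, like GNU strings."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Demo on an in-memory buffer; on a real system you would instead do:
#   data = open("/path/to/beeble-ai", "rb").read()
data = b"\x00\x01inspyrenet\xffdepth_anything_v2\x02ab\x03"
found = extract_strings(data)
print(found)  # ['inspyrenet', 'depth_anything_v2']

# Case-insensitive filtering, analogous to `grep -i`
hits = [s for s in found if "inspyrenet" in s.lower()]
print(hits)
```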
## Method 2: Process memory inspection

When the application is running, you can see what shared libraries and files it has loaded.
```bash
# Find the beeble process ID
pgrep -f beeble-ai

# List all memory-mapped files
cat /proc/<PID>/maps | grep -v "\[" | awk '{print $6}' | sort -u

# Or use lsof to see open files
lsof -p <PID> | grep -i "model\|\.so\|python"
```
This shows which shared libraries (.so files) are loaded and which files the application has open. Look for references to Python libraries, CUDA/TensorRT files, and model data.
For deeper analysis, you can extract strings from the running process memory:

```bash
# Dump readable strings from process memory. Reading /proc/<PID>/mem
# sequentially can fail because the file is sparse; if it does, dump a
# core file first with gcore (part of gdb) and search that instead:
#   gcore -o /tmp/beeble <PID> && strings /tmp/beeble.<PID> | grep -i ...
strings /proc/<PID>/mem 2>/dev/null | grep -i "HighPerfGpuNet"
strings /proc/<PID>/mem 2>/dev/null | grep -i "encoder_name"
```
## Method 3: RPM package contents

If Beeble Studio was installed via RPM, you can inspect the package contents without running the application.
```bash
# List all files installed by the package
rpm -ql beeble-studio

# Or if you have the RPM file
rpm -qlp beeble-studio-*.rpm
```
This reveals the directory structure, installed Python libraries, and bundled model files.
## Method 4: Manifest inspection

The application downloads a manifest file during setup that lists the models it uses. If you have a copy of this file (it is downloaded to the application's data directory during normal operation), you can inspect it directly:
```bash
# Pretty-print and search the manifest
python3 -m json.tool manifest.json | grep -i "model\|name\|type"
```
A copy of this manifest is included in this repository at `evidence/manifest.json`.
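The manifest's exact schema is Beeble's own, so rather than assume its field layout, a generic walk over the JSON tree can inventory entries. The sketch below collects every value stored under a `"name"` key at any nesting depth; the inline sample document is purely illustrative (on a real system you would `json.load` the downloaded `manifest.json` instead):

```python
import json

def collect_values(node, key):
    """Recursively collect all values stored under `key` in nested JSON."""
    found = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                found.append(v)
            found.extend(collect_values(v, key))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_values(item, key))
    return found

# Illustrative document; real usage: manifest = json.load(open("manifest.json"))
manifest = json.loads("""
{"models": [{"name": "alpha-matting", "files": [{"name": "model.enc"}]},
            {"name": "depth-estimation"}]}
""")
print(sorted(collect_values(manifest, "name")))
# ['alpha-matting', 'depth-estimation', 'model.enc']
```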
## Method 5: Library directory inspection

The application's `lib/` directory contains all bundled Python packages. You can inventory them directly:
```bash
# List all top-level packages
ls /path/to/beeble-studio/lib/

# Check for license files
find /path/to/beeble-studio/lib/ \( -name "LICENSE*" -o -name "COPYING*" \)

# Check specific library versions
ls /path/to/beeble-studio/lib/ | grep -i "torch\|timm\|kornia"

# Count packages with and without license files
# (count distinct package directories, since one package may carry
# several license files)
total=$(ls -d /path/to/beeble-studio/lib/*/ | wc -l)
licensed=$(find /path/to/beeble-studio/lib/ -maxdepth 2 \
  -name "LICENSE*" -printf '%h\n' | sort -u | wc -l)
echo "$licensed of $total packages have license files"
```
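The same tally can be done portably in Python. This sketch builds a throwaway directory tree just to demonstrate the counting logic; on a real system you would point `lib` at the application's `lib/` directory instead:

```python
import tempfile
from pathlib import Path

def license_coverage(lib: Path) -> tuple[int, int]:
    """Return (packages_with_license, total_packages) for a lib/ directory."""
    packages = [p for p in lib.iterdir() if p.is_dir()]
    licensed = sum(
        1 for p in packages
        if any(f.name.upper().startswith(("LICENSE", "COPYING"))
               for f in p.iterdir())
    )
    return licensed, len(packages)

# Demo tree: three packages, one of which ships a license file
with tempfile.TemporaryDirectory() as tmp:
    lib = Path(tmp)
    for name in ("timm", "kornia", "torch"):
        (lib / name).mkdir()
    (lib / "timm" / "LICENSE").write_text("Apache-2.0")
    licensed, total = license_coverage(lib)
    print(f"{licensed} of {total} packages have license files")  # 1 of 3 ...
```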
## Method 6: Electron app inspection

Beeble Studio's desktop UI is an Electron application. The compiled JavaScript source is bundled in the application's `resources/` directory and is not obfuscated. You can extract and read it:
```bash
# Find the app's asar archive or dist directory
find /path/to/beeble-studio/ -name "*.js" -path "*/dist/*"

# Search for CLI flag construction
grep -r "run-pbr\|run-alpha\|run-depth\|pbr-stride" /path/to/dist/

# Search for output channel definitions
grep -r "basecolor\|normal\|roughness\|specular\|metallic" /path/to/dist/
```
This reveals how the UI passes arguments to the engine binary, confirming that alpha, depth, and PBR are independent processing stages.
## Method 7: PyInstaller module listing

The `beeble-ai` binary is a PyInstaller-packaged Python application. You can list the bundled Python modules without executing the binary:
```bash
# PyInstaller archives have a table of contents that lists all bundled
# modules. Several open-source tools (e.g., pyinstxtractor) can extract
# this.

# Look for the pyarmor runtime and obfuscated module names
strings /path/to/beeble-ai | grep "pyarmor_runtime"
strings /path/to/beeble-ai | grep -E "^[a-z0-9]{8}\." | head -20
```
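As a quick preliminary check that the binary is PyInstaller-packaged at all, you can scan it for PyInstaller's 8-byte archive magic cookie. A minimal sketch, demonstrated on an in-memory buffer (the binary path in the comment is an assumption):

```python
# PyInstaller marks its bundled CArchive with this 8-byte magic cookie.
PYINSTALLER_MAGIC = b"MEI\014\013\012\013\016"

def find_pyinstaller_magic(data: bytes) -> int:
    """Return the offset of the PyInstaller magic, or -1 if absent."""
    return data.find(PYINSTALLER_MAGIC)

# Demo buffer; real usage: data = open("/path/to/beeble-ai", "rb").read()
data = b"\x7fELF" + b"\x00" * 64 + PYINSTALLER_MAGIC + b"\x00" * 16
offset = find_pyinstaller_magic(data)
print("PyInstaller archive" if offset != -1 else "not PyInstaller", offset)
```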
## What to look for

When running these commands, look for:
- **Library docstrings**: Complete API documentation strings from open-source packages (e.g., the `transparent-background` library's output type options: 'rgba', 'green', 'white', 'blur', 'overlay')

- **Model checkpoint names**: References to specific pretrained model files (e.g., `dinov2_vits14_pretrain.pth`, `depth-anything-v2-large`)

- **Package URLs**: GitHub repository URLs for open-source projects

- **Python import paths**: Module paths like `timm.models`, `kornia.onnx`, `segmentation_models_pytorch.decoders`

- **Runtime warnings**: Messages from loaded libraries (e.g., `WARNING:dinov2:xFormers not available`)

- **TensorRT plugin names**: Custom plugins like `DisentangledAttention_TRT` and `RnRes2FullFusion_TRT` that identify specific architectures

- **Quantization patterns**: Strings like `int8_resnet50_stage_2_fusion` that reveal which backbones are compiled for inference

- **Detection pipeline modules**: Full module paths for the detection/tracking pipeline (e.g., `kornia.contrib.models.rt_detr`, `kornia.models.tracking.boxmot_tracker`, `kornia.contrib.face_detection`)

- **Absent terminology**: The absence of physics-based rendering terms (Cook-Torrance, BRDF, spherical harmonics) throughout the binary is itself a finding, given that the CVPR paper describes such an architecture

- **Absent branding**: The term "SwitchLight" does not appear anywhere in the binary, the setup binary, or the Electron app source code. This can be verified by searching all three codebases

- **License file gap**: Counting LICENSE files in the `lib/` directory versus total packages reveals the scope of missing attribution
## What not to do

This guide is limited to observation. The following activities are unnecessary for verification and may violate Beeble's terms of service:
- Do not attempt to decrypt the `.enc` model files
- Do not decompile or disassemble the `beeble-ai` binary
- Do not bypass or circumvent the Pyarmor code protection
- Do not intercept network traffic between the app and Beeble's servers
- Do not extract or redistribute any model files
The methods described above are sufficient to confirm which open-source components are present and what architectural patterns they suggest. There is no need to go further than that.
37464
evidence/manifest.json
Normal file
File diff suppressed because it is too large
174
evidence/marketing_claims.md
Normal file
# Beeble Marketing Claims Archive

Exact quotes from Beeble's public-facing pages, archived January 2026. These are provided for reference so readers can compare marketing language against the technical findings documented in this repository.

All quotes are reproduced verbatim. Emphasis (bold) is preserved as it appears on the original pages.
## beeble.ai/beeble-studio

Page title: "Production-Grade 4K AI Relighting, Fully On-Device"

Main heading: "4K Relighting Fully on Your Machine"

Subheading: "DESKTOP APP FOR LOCAL PROCESSING"

Feature description (Local AI Model section):

> Run our most advanced **Video-to-PBR model, SwitchLight 3**, directly
> on your desktop. Perfect for professionals who need local workflow.

Video-to-Assets feature card:

> **PBR, Alpha & Depth Map Generation**
> Powered by **SwitchLight 3.0**, convert images and videos into
> **full PBR passes with alpha and depth maps** for seamless
> relighting, background removal, and advanced compositing.

FAQ on the same page ("What is Beeble Studio?"):

> **Beeble Studio** is our desktop application that runs entirely on
> your local hardware. It provides:
> - Local GPU Processing: Process up to 4K and 1 hour using your
>   NVIDIA GPU
> - Unlimited Rendering: No credit consumption for Video-to-VFX
> - Complete Privacy: Your files never leave your machine
>
> Same AI Models: Access to SwitchLight 3.0 and all core features

Source: https://beeble.ai/beeble-studio
## beeble.ai/research/switchlight-3-0-is-here

Published: November 5, 2025

Headline: "Introducing the best Video-to-PBR model in the world"

Opening paragraph:

> SwitchLight 3.0 is the best Video-to-PBR model in the world. It
> delivers unmatched quality in generating physically based rendering
> (PBR) passes from any video, setting a new standard for VFX
> professionals and filmmakers who demand true production-level
> relighting.

On the "true video model" claim:

> **True Video Model:** For the first time, SwitchLight is a **true
> video model** that processes multiple frames simultaneously. Earlier
> versions relied on single-frame image processing followed by a
> separate deflicker step. In version 3.0, temporal consistency is
> built directly into the model, delivering smoother, more stable
> results while preserving sharp detail.

On training data:

> **10x Larger Training Set:** Trained on a dataset ten times bigger,
> SwitchLight 3.0 captures a broader range of lighting conditions,
> materials, and environments for more realistic results.

Detail quality claim:

> SwitchLight 3.0 achieves a new level of detail and visual clarity.
> Compared to 2.0, it captures **finer facial definition, intricate
> fabric wrinkles, and sophisticated surface patterns** with far
> greater accuracy.

Motion handling:

> SwitchLight 3.0 is a **true end-to-end video model** that
> understands motion natively. Unlike SwitchLight 2.0, which relied
> on an image model and post-smoothing, it eliminates **flicker and
> ghosting** even in extreme motion, shaking cameras, or vibrating
> subjects.

Source: https://beeble.ai/research/switchlight-3-0-is-here
## docs.beeble.ai/help/faq

"Is Beeble's AI trained responsibly?":

> **Yes.** Beeble's proprietary models--such as **SwitchLight**--are
> trained only on ethically sourced, agreement-based datasets. We
> never use scraped or unauthorized data.
>
> When open-source models are included, we choose them
> carefully--only those with published research papers that disclose
> their training data and carry valid commercial-use licenses.

"What is Video-to-VFX?":

> **Video-to-VFX** uses our foundation model, **SwitchLight 3.0**,
> and SOTA AI models to convert your footage into VFX-ready assets
> by generating:
> - **PBR Maps:** Normal, Base color, Metallic, Roughness, Specular
>   for relighting
> - **Alpha:** foreground matte for background replacement
> - **Depth Map:** For compositing and 3D integration

Source: https://docs.beeble.ai/help/faq
## docs.beeble.ai/beeble-studio/video-to-vfx

Product documentation page:

> **Video-to-VFX** uses our foundation model, SwitchLight 3.0, to
> convert your footage into VFX-ready assets.

Key Features section:

> **PBR, Alpha & Depth Pass Generation**
> Powered by **SwitchLight 3.0**. Convert footage into full PBR
> passes with alpha and depth maps.

Source: https://docs.beeble.ai/beeble-studio/video-to-vfx
## Investor and press coverage

### Seed funding (July 2024)

Beeble raised a $4.75M seed round at a reported $25M valuation. The round was led by Basis Set Ventures and Fika Ventures. At the time of funding, the company had approximately 7 employees.

Press coverage from the funding round:

> Beeble [...] has raised $4.75 million in seed funding to develop
> its **foundational model** for AI-powered relighting in video.

Source: TechCrunch and other outlets, July 2024

Investor quotes (public press releases):

Basis Set Ventures described SwitchLight as a "world-class foundational model in lighting" in their investment rationale.

The term "foundational model" is significant. In the AI industry, it implies a large-scale, general-purpose model trained from scratch on diverse data, in the vein of GPT-4, DALL-E, or Stable Diffusion. The technical evidence suggests Beeble's pipeline is a collection of open-source models (some used directly, others used as architectural building blocks) with proprietary weights trained on domain-specific data. Whether this constitutes a "foundational model" is a characterization question, but it is a characterization that was used to secure investment.

As of January 2026, the company appears to have approximately 9 employees.
## Notable patterns

Beeble's marketing consistently attributes the entire Video-to-VFX pipeline to SwitchLight 3.0. The Beeble Studio page states that PBR, alpha, and depth map generation are all "Powered by SwitchLight 3.0."

The FAQ is the only place where Beeble acknowledges the use of open-source models, stating they "choose them carefully" and select those with "valid commercial-use licenses." However, the FAQ does not name any of the specific open-source models used, nor does it clarify which pipeline stages use open-source components versus SwitchLight.

The overall marketing impression is that SwitchLight is responsible for all output passes. The technical reality, as documented in this repository's analysis, is that background removal (alpha) and depth estimation are produced by open-source models used off the shelf, and the PBR decomposition models appear to be architecturally built from open-source frameworks (segmentation_models_pytorch, timm backbones) with proprietary trained weights. See [docs/REPORT.md](../docs/REPORT.md) for the full analysis.