Portable, physics-driven objects for generative video
Four research threads are converging: gaussian splatting, neural rendering, physics simulation, and scene decomposition. SplatForge maps where they collide and builds what emerges.
Gaussian Splatting
From SIGGRAPH 2023 to Superman VFX in under two years. 3DGS is the new primitive for real-time photorealistic rendering, now entering production pipelines at OTOY, in Foundry's Nuke, and in Unreal Engine.
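At its core, the 3DGS primitive reduces, per pixel, to evaluating each depth-sorted splat's 2D Gaussian footprint and alpha-blending front to back. A minimal sketch of that compositing loop (function names and the flat splat tuples are illustrative, not any shipping renderer's API):

```python
import numpy as np

def splat_weight(pixel, mean, cov, opacity):
    """Alpha contribution of one projected 2D Gaussian splat at a pixel."""
    d = pixel - mean
    return opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def composite(pixel, splats):
    """Front-to-back alpha blending over splats pre-sorted near to far."""
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light still reaching the camera
    for mean, cov, opacity, rgb in splats:
        a = splat_weight(pixel, mean, cov, opacity)
        color += transmittance * a * rgb
        transmittance *= (1.0 - a)  # each splat occludes what lies behind it
    return color
```

Real implementations tile the screen, sort splats on the GPU, and backpropagate through this blend; the math per pixel is exactly this loop.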
Neural Rendering Compositing
Compositional NeRFs, object-aware radiance fields, and multi-view diffusion models are making it possible to render, relight, and composite individual neural objects into arbitrary scenes.
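Once a neural object is rendered with its own alpha matte, dropping it into an arbitrary scene is the classic Porter-Duff "over" operation. A hedged sketch, assuming straight (un-premultiplied) alpha; the `over` helper is our illustrative name, not an API from any of the systems above:

```python
import numpy as np

def over(obj_rgb, obj_alpha, scene_rgb):
    """Composite a rendered object over a background scene (straight alpha)."""
    return obj_alpha * obj_rgb + (1.0 - obj_alpha) * scene_rgb
```

Relighting-aware pipelines add shadow and lighting passes on top, but this operator is the final step that places the object in the frame.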
Object-Centric Decomposition
DynaVol-S, OSFs, and structured scene graphs decompose complex environments into individually controllable objects with semantic understanding, physics properties, and independent motion.
Physics-Based Material Sim
MPM, FEM, and position-based dynamics integrated directly into gaussian splat representations. PhysGaussian, GaussianFluent, and PIDG show that elastic, fluid, and granular materials can be simulated natively.
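The key idea behind these methods is that splat centers can double as simulation particles. A minimal position-based dynamics step under that assumption (gravity plus a ground-plane constraint; a toy sketch, not PhysGaussian's actual solver):

```python
import numpy as np

def pbd_step(positions, velocities, dt=1.0 / 60.0, gravity=-9.8, ground=0.0):
    """One PBD step on splat centers: predict, project constraints,
    then recover velocities from the corrected positions."""
    vel = velocities + dt * np.array([0.0, gravity, 0.0])
    pred = positions + dt * vel
    # Constraint projection: keep every splat center above the ground plane.
    pred[:, 1] = np.maximum(pred[:, 1], ground)
    new_vel = (pred - positions) / dt
    return pred, new_vel
```

Production systems add inter-particle constraints (MPM grids, FEM elements) and update each splat's covariance from the local deformation gradient, so the rendered gaussians stretch and shear with the material.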
The research exists.
The market exists.
The product doesn't.
Luma AI raised $1B for gaussian splatting capture and video generation. Tesla uses 3DGS for world simulation at 220ms per scene. Zillow ships it to millions of home buyers.
But nobody is building portable, physics-aware objects designed to be dropped into generative video pipelines. Not Sora. Not Veo. Not Runway. They generate pixels. We forge objects.
SplatForge sits at the convergence of these four threads, tracking the research and building the tools to turn gaussian splats into compositable, physics-driven digital matter.
Objects that know their physics, rendered in real time, composited into any world.
SplatForge is building the bridge between physics-based simulation research and the generative video revolution. The four threads converge here.