GPU-Native Infrastructure for Serious Compute
Simulation, medical imaging, game development, scientific visualization: all share one constraint. The formats underneath them were not built for the data volumes they now handle.
The problem is not compute. Modern GPUs have headroom most pipelines never reach. The bottleneck is I/O: how geometry is stored, loaded, and moved through the stack.
Refactr builds the infrastructure layer that removes it. Format design, pipeline architecture, and GPU-native processing engineered so every stack built on top runs at the speed the hardware was always capable of.
Out of hundreds of teams across campus rounds, Refactr advanced to the Regional stage of the Zindigi Prize, judged on technical depth, market potential, and execution readiness.

Seven years at the intersection of graphics and systems. Built a real-time DICOM-to-3D lung visualization pipeline from scratch on a consumer GPU. When the tools he needed didn't exist, he built them. Founded Refactr to turn that instinct into infrastructure.
Built the first half of Refactr's medical pipeline: preprocessing CT scans, segmenting lung tissue, and reconstructing 3D volumes from DICOM data. Unglamorous, load-bearing work that everything else sits on. That foundation shapes how she runs operations. Zero to shipped.
Thinks in hardware, builds in software. FPGA background gave him a rare instinct for where compute gets wasted and how to recover it. Migrated the entire pipeline to CUDA: 73% faster volume generation, 60% faster texture processing. If there's a faster path, he'll find it.
Processing bottlenecks are I/O problems, not compute ones. We rebuild pipelines around GPU memory architecture, eliminating the CPU-bound parsing overhead that stalls serious workloads.
Legacy formats store geometry as human-readable text, parsed character by character on every load. Our format is binary, compressed, and reads directly into memory with no parsing step.
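The contrast can be sketched in a few lines. The layout below is a hypothetical minimal example, not Refactr's actual format: a vertex count followed by tightly packed float32 triples. The point is that the binary route is a fixed-size read into memory, while the text route re-tokenises and re-parses every character on every load.

```python
import struct

# Hypothetical minimal layout -- not Refactr's actual format.
# Header: vertex count (uint32), then tightly packed float32 x,y,z triples.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Text route (OBJ-style): every load tokenises and parses floats from characters.
obj_text = "".join(f"v {x} {y} {z}\n" for x, y, z in vertices)
parsed = [tuple(map(float, line.split()[1:])) for line in obj_text.splitlines()]

# Binary route: one fixed-size read straight into memory, no tokenising.
blob = struct.pack("<I", len(vertices)) + struct.pack(
    f"<{3 * len(vertices)}f", *(c for v in vertices for c in v)
)
(count,) = struct.unpack_from("<I", blob, 0)
flat = struct.unpack_from(f"<{3 * count}f", blob, 4)
loaded = [flat[i:i + 3] for i in range(0, len(flat), 3)]
```

The text path's cost scales with character count; the binary path's scales with byte count and skips the float parser entirely, which is where the orders-of-magnitude read-speed gap in the table below comes from.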
Raw volumetric data to fully processed output, entirely on the GPU. Normals, compression, and format conversion run in parallel on the same hardware your renderer already uses.
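Normal generation is the textbook case of that parallelism: every face is independent, so the work maps one-thread-per-face onto the GPU. The pure-Python stand-in below shows the per-face computation only; in a production pipeline this would be a CUDA kernel, and the names here are illustrative.

```python
import math

# Per-face normal for a triangle: cross product of two edge vectors, normalised.
# Each face is independent -- the kind of per-element work that maps
# one-thread-per-face onto a GPU. This loop is a CPU stand-in for that kernel.
def face_normal(a, b, c):
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(x * x for x in n)) or 1.0  # guard degenerate faces
    return tuple(x / length for x in n)

faces = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
normals = [face_normal(*f) for f in faces]
```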
| Format | File Size | Read Speed | Write Speed | Lossless |
|---|---|---|---|---|
| Refactr | 396 MB | 510 ms | 1.9 s | |
| OBJ | 1.74 GB | 333 s | 65.5 s | |
| STL | 1.01 GB | 1.9 s | 18.4 s | |
| PLY | 505 MB | 14.8 s | 7.2 s | |
Tested on a high-density production mesh. Best of 3 runs. Consumer GPU.
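For readers who want to reproduce the shape of this comparison on their own meshes, a best-of-3 read timing reduces to a few lines. This is an illustrative harness in the spirit of the methodology above, not the one used to produce these numbers.

```python
import os
import tempfile
import time

# Illustrative best-of-3 read timing, not the harness behind the table above.
def best_of_3_read(path):
    timings = []
    for _ in range(3):
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read()
        timings.append(time.perf_counter() - start)
    return min(timings)  # best of 3 runs

# Stand-in payload: 1 MiB of random bytes in place of a real mesh file.
path = os.path.join(tempfile.mkdtemp(), "mesh.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))
elapsed = best_of_3_read(path)
```

Taking the minimum of repeated runs discounts cold-cache and scheduler noise, which is why benchmark tables typically report best-of-N rather than a single pass.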
Surgeons navigate millimetre-scale lung structures through flat 2D slices, where misreads are structural, not exceptional. 3D reconstruction changes surgical outcomes. Refactr built the proof of concept: a full pipeline from raw CT scan data to real-time 3D lung visualization with tumour localisation, running on a consumer GPU.
Asset pipelines at scale spend more time moving geometry than rendering it. OBJ files that take minutes to load, format conversions that block the build, normals dropped on export. The I/O layer should never be the reason a pipeline stalls.
Mesh data in VFX routinely exceeds tens of millions of polygons per asset. Read and write overhead at that scale compounds across every iteration. Faster I/O means more iterations in the same window, which is where quality actually comes from.
We're building for rendering engineers, technical artists, and compute teams at studios and simulation labs who've already maxed out what their current pipeline can do.