TensoRF Reproduce
A faithful reproduction of the TensoRF neural rendering pipeline, benchmarked on NJIT's Wulver H100 cluster and developed on Arch Linux.
Overview
This project recreates the TensoRF approach to neural radiance fields, with a focus on faithful training scripts, evaluation utilities, and reproducibility notes for modern GPUs. Experiments were run on NJIT's Wulver H100 nodes from an Arch Linux development environment to validate the performance and convergence claims of the original paper.
Architecture & Features
Training workflow
Key components that make the reproduction reliable across local and H100 cluster runs.
- Data prep: Scripts for downloading NeRF datasets and standardizing scene bounds for tensor grids (see the first sketch after this list).
- Optimization loop: Mixed-precision training with checkpointing tuned for H100 memory profiles (second sketch below).
- Evaluation stage: Batched rendering pipeline that reports PSNR/SSIM and saves side-by-side comparisons.
- Reproducibility notes: Seeded runs, config files, and Arch Linux package pins to remove environment drift (third sketch below).
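To make the scene-bound standardization concrete, here is a minimal sketch: recenter and rescale the camera poses so the scene's axis-aligned bounding box fits the [-1, 1] cube a tensor grid samples from. The function name, array layout, and NumPy-only approach are illustrative assumptions, not this repo's exact API.

```python
import numpy as np

def normalize_scene(poses: np.ndarray, aabb: np.ndarray):
    """Map the scene AABB into the [-1, 1] cube expected by the tensor grid.

    poses: (N, 4, 4) camera-to-world matrices (hypothetical layout).
    aabb:  (2, 3) array of [min_corner, max_corner] world-space bounds.
    """
    center = aabb.mean(axis=0)
    scale = 2.0 / (aabb[1] - aabb[0]).max()  # longest side spans 2 units
    poses = poses.copy()
    poses[:, :3, 3] = (poses[:, :3, 3] - center) * scale  # shift/scale camera origins
    new_aabb = (aabb - center) * scale
    return poses, new_aabb
```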
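The optimization loop follows PyTorch's standard mixed-precision pattern: autocast for the forward pass, a GradScaler for the backward pass, and periodic torch.save checkpoints. This is a hedged sketch; the linear model, batch shapes, and checkpoint path are stand-ins for the real ray-rendering step.

```python
import torch

model = torch.nn.Linear(3, 3).cuda()             # stand-in for the TensoRF model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-2)
scaler = torch.cuda.amp.GradScaler()             # loss scaling for fp16 stability

for step in range(1000):
    rays = torch.randn(4096, 3, device="cuda")   # stand-in ray batch
    target = torch.rand(4096, 3, device="cuda")  # stand-in ground-truth colors
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():              # half-precision forward pass
        loss = torch.mean((model(rays) - target) ** 2)  # photometric MSE
    scaler.scale(loss).backward()                # backward on the scaled loss
    scaler.step(optimizer)                       # unscale grads, then step
    scaler.update()
    if step % 500 == 0:                          # periodic checkpoint
        torch.save({"step": step, "model": model.state_dict()}, "ckpt.pth")
```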
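The seeded-runs item boils down to pinning every RNG the pipeline touches. A typical helper looks like the following; the exact set of determinism knobs this repo pins is an assumption.

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    random.seed(seed)                 # Python-level RNG
    np.random.seed(seed)              # NumPy sampling (e.g., ray selection)
    torch.manual_seed(seed)           # CPU and CUDA default generators
    torch.cuda.manual_seed_all(seed)  # all visible GPUs
    # Deterministic cuDNN kernels trade some speed for repeatable results.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

The cuDNN flags cost some throughput but make convergence curves comparable across local and cluster runs.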
Tech Stack
Paper-accurate training
Implements the same tensor decomposition strategy and loss configuration as the TensoRF paper, enabling controlled reproduction runs.
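For orientation, below is a minimal sketch of the vector-matrix (VM) factorization idea at the heart of TensoRF: each of three modes pairs a learned 2D plane with a learned 1D line, and density at a point is the plane-line feature product summed over components. Class name, shapes, and initialization are illustrative, not this repo's API; the real model also carries appearance features and an RGB decoding step that this sketch omits.

```python
import torch
import torch.nn.functional as F

class VMDensityGrid(torch.nn.Module):
    """Hypothetical VM-factorized density grid (density path only)."""

    def __init__(self, resolution: int = 128, n_components: int = 16):
        super().__init__()
        R, C = resolution, n_components
        # One plane + one line per mode: (XY, Z), (XZ, Y), (YZ, X).
        self.planes = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(1, C, R, R)) for _ in range(3)]
        )
        self.lines = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(1, C, R, 1)) for _ in range(3)]
        )
        self.plane_axes = [(0, 1), (0, 2), (1, 2)]  # axes sampled by each plane
        self.line_axes = [2, 1, 0]                  # axis sampled by each line

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:  # xyz: (N, 3) in [-1, 1]
        N = xyz.shape[0]
        sigma = xyz.new_zeros(N)
        for plane, line, (a, b), c in zip(
            self.planes, self.lines, self.plane_axes, self.line_axes
        ):
            # Bilinearly sample the plane at (x_a, x_b), the line at x_c.
            p_coord = xyz[:, [a, b]].view(1, N, 1, 2)
            l_coord = torch.stack(
                [torch.zeros_like(xyz[:, c]), xyz[:, c]], dim=-1
            ).view(1, N, 1, 2)
            p_feat = F.grid_sample(plane, p_coord, align_corners=True).view(-1, N)
            l_feat = F.grid_sample(line, l_coord, align_corners=True).view(-1, N)
            sigma = sigma + (p_feat * l_feat).sum(dim=0)  # sum over components
        return F.softplus(sigma)  # non-negative density
```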
Cluster-ready orchestration
Includes scripts tuned for NJIT's Wulver H100 GPUs, with CUDA optimizations and Slurm-friendly entrypoints.
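As one reading of "Slurm-friendly", the entrypoint can derive its distributed-training settings from environment variables that Slurm exports, so the same script runs under srun on Wulver or on a single local GPU. The pattern below is a common convention, assumed rather than copied from this repo.

```python
import os

import torch

def init_distributed() -> tuple[int, int]:
    """Read Slurm's environment; fall back to single-process defaults."""
    rank = int(os.environ.get("SLURM_PROCID", 0))
    world_size = int(os.environ.get("SLURM_NTASKS", 1))
    local_rank = int(os.environ.get("SLURM_LOCALID", 0))
    if world_size > 1:
        # NCCL rendezvous needs a master address; on a cluster this would
        # come from the job script (assumed here, with a local fallback).
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        torch.distributed.init_process_group(
            backend="nccl", rank=rank, world_size=world_size
        )
    torch.cuda.set_device(local_rank)
    return rank, world_size
```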
Evaluation + visualization
Supports PSNR/SSIM reporting, dataset preprocessing, and view synthesis visualizations to compare against published baselines.
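PSNR is simple enough to show inline: for renders scaled to [0, 1] it reduces to -10 · log10(MSE). SSIM would typically come from a library such as scikit-image or torchmetrics; which one this repo uses is not stated, so only PSNR is sketched here.

```python
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """PSNR in dB for images with values in [0, 1]."""
    mse = torch.mean((pred - gt) ** 2)
    return float(-10.0 * torch.log10(mse))
```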
Environment notes
Documents Arch Linux setup steps, driver requirements, and dependency pins to reproduce results on fresh machines.
Future Vision
Next steps include experimenting with alternate tensor rank schedules and comparing throughput across H100 and consumer GPUs.