Inverse problems like computed tomography, optical inverse rendering, and muon tomography, amongst others, occur in a vast range of scientific, medical, and security applications and are usually solved with highly specific algorithms depending on the task.
Approaching these problems from a physical perspective and reformulating them as a function of particle interactions enables 3D scene reconstruction in a physically consistent manner across particle-mediated modalities.
Recent developments in differentiable volumetric rendering and optical optimization techniques, such as Neural Radiance Fields, Gaussian Splatting, and Scene Representation Networks (SRNs), have demonstrated the feasibility of jointly estimating unknown geometry and material parameters of a 3D scene.
Some works have also shown that refraction and multiple scattering of light can be modeled with differentiable optimization.
Reformulating these problems in terms of the transport and interaction of radiation or particles yields a unified forward model that maps unknown scene parameters to measurements. This enables physically consistent 3D reconstruction for interactions that can be modeled by emission–absorption, refraction, and limited multiple scattering.
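For the emission–absorption case, such a forward model takes the standard volume rendering form; the expression below is a generic textbook formulation, with the absorption coefficient σ, emission term c, ray r(t) = o + t d, and near/far bounds used purely for illustration rather than as the specific parameterization adopted in this work:

\[
I(\mathbf{o}, \mathbf{d}) = \int_{t_n}^{t_f} T(t)\,\sigma\big(\mathbf{r}(t)\big)\,c\big(\mathbf{r}(t), \mathbf{d}\big)\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma\big(\mathbf{r}(s)\big)\,ds\right),
\]

where T(t) is the transmittance accumulated along the ray; refraction and limited multiple scattering extend this transport term with direction changes and additional scattering sources.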
Directly incorporating these interactions into a differentiable pipeline, whose output is captured by a parameterized observer, allows us to decouple the optimization procedure from both the specific type of interaction and the capture mechanism.
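To illustrate this decoupling, the following is a minimal JAX sketch under assumed components: a toy emission–absorption interaction model and a simple gain/offset sensor. The names forward_interaction, observer, scene_params, and observer_params are hypothetical stand-ins, not the actual implementation described here.

```python
# Hypothetical sketch: the optimizer only sees a differentiable map from
# scene and observer parameters to a measurement; the interaction model and
# the observer can be swapped without touching the optimization loop.
import jax
import jax.numpy as jnp

def forward_interaction(scene_params, ray_samples):
    """Toy emission-absorption transport along pre-sampled points on one ray."""
    sigma = jax.nn.softplus(scene_params["sigma"])        # non-negative absorption
    emission = jax.nn.sigmoid(scene_params["emission"])   # bounded emission
    dt = ray_samples[1:] - ray_samples[:-1]                # step sizes along the ray
    dt = jnp.concatenate([dt, dt[-1:]])
    alpha = 1.0 - jnp.exp(-sigma * dt)                     # per-step opacity
    # Exclusive cumulative product: transmittance before reaching each sample.
    transmittance = jnp.cumprod(jnp.concatenate([jnp.ones(1), 1.0 - alpha[:-1] + 1e-10]))
    return jnp.sum(transmittance * alpha * emission)       # accumulated intensity

def observer(intensity, observer_params):
    """Toy parameterized capture model (sensor gain and offset)."""
    return observer_params["gain"] * intensity + observer_params["offset"]

def loss_fn(scene_params, observer_params, ray_samples, measurement):
    prediction = observer(forward_interaction(scene_params, ray_samples), observer_params)
    return (prediction - measurement) ** 2

# Joint gradient step on scene and observer parameters for one measurement.
ray_samples = jnp.linspace(0.0, 1.0, 64)
scene_params = {"sigma": jnp.zeros(64), "emission": jnp.zeros(64)}
observer_params = {"gain": jnp.array(1.0), "offset": jnp.array(0.0)}
measurement = jnp.array(0.7)

scene_grads, obs_grads = jax.grad(loss_fn, argnums=(0, 1))(
    scene_params, observer_params, ray_samples, measurement)
scene_params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, scene_params, scene_grads)
observer_params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, observer_params, obs_grads)
```

Because the loss only touches the composed map, replacing the interaction model (e.g., with a refractive or scattering transport term) or the observer (e.g., a different sensing device) leaves the gradient-based optimization loop unchanged.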
We perform a first experimental validation of our method on simulated and experimentally acquired optical scans from different sensing devices.
Lastly, we explore the cross-domain applicability of the new reconstruction method to other inverse problems, including muon tomography.

