Research
I'm interested in developing spatially intelligent systems: combining generative modeling and neural rendering to reconstruct and synthesize the visual world. My research centers on 3D generation and neural rendering, at the intersection of computer vision, computer graphics, and robotics.
|
|
Amogh Joshi*, Julian Ost*, Felix Heide
Preprint, 2026
project page / arXiv
We present WorldFlow3D, a novel approach for generating unbounded 3D worlds via latent-free sequential flow matching through 3D data distributions.
|
|
Amir Reza Vazifeh, Congli Wang, Amogh Joshi, Ilya Chugunov, Jipeng Sun, Jiwoon Yeom, Jason W. Fleischer, José S. Pulido, Felix Heide
Optics Express, 2026
project page / publication
We introduce an unsupervised method that reconstructs high-resolution fiber bundle images from misaligned bursts without calibration or paired data.
|
|
Julian Ost*, Andrea Ramazzina*, Amogh Joshi*, Maximilian Bömer, Mario Bijelic, Felix Heide
AAAI, 2026
project page / publication / arXiv
We present LSD-3D, a method for generating 3D driving scenes with coherent 3D geometry and photorealistic, high-fidelity texture.
|
|
Ilya Chugunov, Amogh Joshi, Kiran Murthy, Francois Bleibel, Felix Heide
SIGGRAPH Asia, 2024
project page / publication / arXiv
We design a spherical neural light field model for implicit panoramic image stitching and re-rendering, capable of handling depth parallax, view-dependent lighting, and scene motion.
|
|
I've always been deeply interested in analyzing and producing data of all kinds. I have worked extensively in agrobotics and agricultural machine learning, scaling real data and infrastructure, improving efficiency, and producing synthetic crop data. I have also studied the learning process of VLMs, and examined how patterns in online information sharing correlate with ideological positioning in social science. See this for more details.
|
|
Naitik Jain, Amogh Joshi, Mason Earles
CVPR Vision for Agriculture, 2025
publication / arXiv
We introduce iNatAg, a 4.7M-image dataset spanning 2,959 crop and weed species (one of the world's largest for agriculture), and benchmark models that achieve state-of-the-art classification performance.
|
|
Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
NeurIPS, 2024
publication / arXiv
We show that state-of-the-art VLMs fail at basic multi-object reasoning due to the binding problem, which limits their ability to represent multiple entities simultaneously, mirroring capacity limits in human visual processing.
|
|
Amogh Joshi, Cody Buntain
ICWSM, 2024
publication / arXiv
We investigate how US national politicians' use of various visual media on Twitter reflects their political positions, identifying limitations in standard image characterization methods.
|
|
Dario Guevara, Amogh Joshi, Pranav Raja, Elisabeth Forrestel, Brian Bailey, Mason Earles
ISVC, 2023
publication
We present an open-source simulation toolbox designed for the easy generation of synthetic labeled data for both RGB imagery and point cloud information, applicable to a wide array of cultivars.
|
|
Amogh Joshi, Dario Guevara, Mason Earles
Plant Phenomics, 2023
publication / arXiv
We present methods for enhancing data efficiency in agricultural computer vision, which improves performance and reduces training time, and introduce a novel set of model benchmarks.
|
|
Amogh Joshi, Cody Buntain
ICWSM PhoMemes, 2022
publication / arXiv / press
We develop models to analyze the ideological presentation of foreign Twitter accounts based on shared images, revealing inconsistencies in ideological positions across different content types.
|
|
The following are major projects I have helped develop.
|
|
AI Institute for Next Generation Food Systems
project / info
Since its inception, I have led the development of AgML. We have aggregated the world's largest collection of agricultural deep learning datasets, produced benchmarks and pretrained weights for state-of-the-art models, and built a suite of tools for data preprocessing, model training, and deployment, all exposed through an easy-to-use API.
|
My research interests lie in computer vision and graphics, especially for autonomous robotics.
At Princeton, I am part of the Computational Imaging Lab, advised by Professor Felix Heide. My primary work focuses on neural rendering and scene reconstruction, particularly for autonomous driving. I have also worked on a variety of projects across the spectrum of computational imaging, spanning computational photography and optics. Throughout my research, I have collaborated with Mercedes-Benz, Torc Robotics, and Colgate-Palmolive Research. Also at Princeton, I have conducted research with the Neuroscience of Cognitive Control Lab, where I worked on developing approaches grounded in computational cognitive science to better understand VLMs and LLMs.
I'm a member of the Data Generation team at Torc Robotics, where I work on scene reconstruction and generation for autonomous trucking environments. From 2022 to 2024, I worked as an autonomous driving engineer at Monarch Tractor. I was heavily involved in developing the current Row Follow models for vineyards and dairy farms, and I principally developed the Invisible Bucket feature for operator awareness. I also worked on the Follow Me mode, multi-object and multi-person tracking, and, briefly, simulation for vineyards.
I am an affiliate researcher with the AI Institute for Next-Generation Food Systems, and since 2021 I have principally led the development of AgML, the largest software library dedicated to agricultural machine learning data and models. I've also been a member of the Plant AI and Biophysics Lab (advised by Mason Earles), where I've worked on improving the data efficiency and generalizability of computer vision models for agricultural applications, as well as generative modeling of crops. Additionally, I have been a member of Project GEMINI, where I worked on procedural 3D crop modeling for sub-Saharan African crops.
In the more distant past, I was a member of the Information Ecosystems Lab at the University of Maryland (advised by Professor Cody Buntain) where I researched digital information sharing behaviors of U.S. and foreign politicians, with a focus on image content shared on social media.
|