Amogh Joshi
I am an undergraduate student at Princeton University, studying electrical and computer engineering. I am broadly interested in computer vision and graphics, especially applied to robotics and autonomous systems.
At Princeton, I’m in the Computational Imaging Lab, where my research focuses primarily on neural rendering and scene reconstruction for autonomous driving. I also work at Torc Robotics on neural data-driven simulation for autonomous trucks.
Previously, I was at Monarch Tractor, where I worked on autonomous navigation in farms and operator safety & awareness. I am also involved with the AI Institute for Next-Generation Food Systems, where I principally lead the development of AgML. For an extended bio, click here.
At Princeton, I am part of the Computational Imaging Lab. Primarily, I work on neural rendering and scene reconstruction, particularly for autonomous driving. I have also worked on a variety of projects across the spectrum of computational imaging, including computational photography and optics. Throughout my research, I have collaborated with Mercedes-Benz, Torc Robotics, and Colgate-Palmolive Research. Also at Princeton, I am a member of the Neuroscience of Cognitive Control Lab, where I develop approaches grounded in computational cognitive science to better understand VLMs and LLMs and to build more capable artificial intelligence.
From 2022 to 2024, I worked as an autonomous driving engineer at Monarch Tractor. I was heavily involved in the development of the current Row Follow models in vineyards and dairy farms, and I principally developed the Invisible Bucket feature for operator awareness. I also worked on the Follow Me mode, multi-object and multi-person tracking, and briefly, on simulation for vineyards.
I am an affiliate researcher with the AI Institute for Next-Generation Food Systems, and since 2021 I have principally led the development of AgML, the world’s largest software library for agricultural machine learning data and models. I’ve also been a member of the Plant AI and Biophysics Lab (advised by Mason Earles), where I’ve worked on improving the data efficiency and generalizability of computer vision models for agricultural applications, as well as generative modeling of crops. Additionally, I have been a member of Project GEMINI, where I worked on procedural 3D crop modeling for sub-Saharan African crops.
In the more distant past, I was a member of the Information Ecosystems Lab at the University of Maryland (advised by Professor Cody Buntain) where I researched digital information sharing behaviors of U.S. and foreign politicians, with a focus on image content shared on social media.
GitHub /
Google Scholar /
LinkedIn /
Contact
To contact me, please use: amoghjoshi [at] princeton.edu. You can also find me on Twitter at @amogh7joshi.
Replace [at] with @ to email. This helps reduce spam.
- October 2024: I was invited to give a lecture on AgML and agricultural machine learning at New Mexico State University.
- July 2024: Neural Light Spheres was accepted to SIGGRAPH Asia 2024.
For a full background on my research experience, see my extended bio.
Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
NeurIPS, 2024
publication / arXiv
We show that state-of-the-art VLMs fail at basic multi-object reasoning due to the binding problem, which limits their ability to represent multiple entities simultaneously, mirroring capacity limits observed in human visual processing.
Ilya Chugunov, Amogh Joshi, Kiran Murthy, Francois Bleibel, Felix Heide
SIGGRAPH Asia, 2024
project page / publication / arXiv
We design a spherical neural light field model for implicit panoramic image stitching and re-rendering, capable of handling depth parallax, view-dependent lighting, and scene motion.
Amogh Joshi, Cody Buntain
ICWSM, 2024
publication / arXiv
We investigate how U.S. national politicians' use of various visual media on Twitter reflects their political positions, identifying limitations in standard image characterization methods.
Dario Guevara, Amogh Joshi, Pranav Raja, Elisabeth Forrestel, Brian Bailey, Mason Earles
ISVC, 2023
publication
We present an open-source simulation toolbox designed for the easy generation of synthetic labeled data for both RGB imagery and point cloud information, applicable to a wide array of cultivars.
Amogh Joshi, Dario Guevara, Mason Earles
Plant Phenomics, 2023
publication / arXiv
We present methods for enhancing data efficiency in agricultural computer vision, which improves performance and reduces training time, and introduce a novel set of model benchmarks.
Amogh Joshi, Cody Buntain
ICWSM, 2022
publication / arXiv / press
We develop models to analyze the ideological presentation of foreign Twitter accounts based on shared images, revealing inconsistencies in ideological positions across different content types.
The following are major projects that I have developed myself or been closely involved in.
AI Institute for Next Generation Food Systems
project / info
Since its inception, I have led the development of AgML. We have aggregated the world's largest collection of agricultural deep learning datasets, produced benchmarks and pretrained weights for state-of-the-art models, and developed a suite of tools for data preprocessing, model training, and deployment in an easy-to-use API.
I am a huge photography enthusiast, and you can check out some of my work here. You can also check out my travel page for more photos of my frequent travels.
I'm also an avid reader; you can check out my reading list for past and current books.
Who could have seen this coming, another Jon Barron clone. This has gone too far.