---
layout: default
title: Home
---
<!--
<div class="divTable">
<div class="divTableBody">
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
<img class="left" style="vertical-align:top" src="cuidpic2.jpg" width="100%">
</div>
<div class="divTableCell">
<p> I am a PhD student in Computer Science at Columbia University, where I am co-advised by <a href="http://www.cs.columbia.edu/~vondrick/" target="_blank">Carl Vondrick</a> and <a href="http://www.cs.toronto.edu/~zemel/inquiry/home.php" target="_blank">Rich Zemel</a>. I am interested in developing AI systems that can reason about the world in complex, human-like ways,
and applying these systems to real-world problems in healthcare and climate change. My research is supported by the <a href="https://www.nsfgrfp.org/" target="_blank">National Science Foundation Graduate Fellowship</a>. </p>
<p> Previously, I completed my BSE in Computer Science at Princeton University with a minor in Applied Mathematics. I worked with <a href="https://www.cs.princeton.edu/~olgarus/" target="_blank">Olga Russakovsky</a>
in the Princeton Visual AI Lab and <a href="https://www.cs.princeton.edu/~rpa/" target="_blank">Ryan Adams</a>. </p>
<p> I am trained in Indian Classical Music - I perform vocal concerts in the US and India, and am passionate about spreading awareness of this art form. I also enjoy yoga, playing tennis, and hiking. </p>
</div>
</div>
-->
<div class="post-padding"></div>
<div class="about-wide">
<div class="about-cols">
<div class="about-img">
<img src="./cuidpic2.jpg" width="100%" alt="profile pic">
</div>
<div class="about-desc">
<p> Hi there! 👋 I am a PhD student in Computer Science at Columbia University, co-advised by <a href="http://www.cs.columbia.edu/~vondrick/" target="_blank">Carl Vondrick</a> and <a href="http://www.cs.columbia.edu/~zemel/" target="_blank">Richard Zemel</a>. I am broadly interested in developing AI systems that are capable of scientific design and discovery. My research has introduced new ML-based approaches for simulation and lab-in-the-loop experimentation.
My doctoral work has been supported by an NSF Graduate Research Fellowship. </p>
<p> Previously, I completed my BSE in Computer Science at Princeton University, with a minor in Applied Mathematics. I had the pleasure of working with <a href="https://www.cs.princeton.edu/~olgarus/" target="_blank">Olga Russakovsky</a>
in the Princeton Visual AI Lab and <a href="https://www.cs.princeton.edu/~rpa/" target="_blank">Ryan P. Adams</a>. </p>
<p> Outside of research, I am a performing Carnatic (South Indian Classical music) vocalist. </p>
<p> <a href="mailto:asm2290@columbia.edu">Email</a> / <a href="/pdfs/CV_ArjunMani.pdf" target="_blank">CV</a> / <a href="https://scholar.google.com/citations?user=oAsR1RQAAAAJ&hl=en" target="_blank">Google Scholar</a> / <a href="https://www.linkedin.com/in/arjun-mani/" target="_blank">LinkedIn</a> </p>
</div>
</div>
</div>
<!--
<div class="about-thin">
<div class="about-row">
<center><img src="./cuidpic2.jpg" width="50%" alt="profile pic" title="Yellowstone 2018"></center>
<h1>about</h1>
<p>Hi! 👋 I'm a first-year PhD student at Stanford University.
I'm interested in computer vision, computer graphics and human-computer interaction,
as well as developing better creative tools.
</p>
</div>
</div>
-->
<!-- <div class="divTableRow">
<div class="divTableCell">
</div>
<div class="divTableCell">
<h2 id="Research">Research</h2>
</div>
</div> -->
<h1 id="Research" style="color:black;padding-top:20px;">Research</h1>
<div class="research">
<div class="research-img">
<img style="height:190px; width:200px" src="../imgs/designopt_web.png">
</div>
<div class="research-desc">
<p style="font-size:18px;font-weight:1000"><a href="https://designopt.cs.columbia.edu/">Few-Shot Design Optimization by Exploiting Auxiliary Information</a></p>
<p><span class="extra-bold">Arjun Mani</span>, <a href="https://www.cs.columbia.edu/~vondrick/">Carl Vondrick</a>, <a href="https://www.cs.columbia.edu/~zemel/">Richard Zemel</a></p>
<p>arXiv preprint. <i>In submission.</i></p>
<p> <a href="https://arxiv.org/abs/2602.12112">[arXiv]</a> / <a href="https://designopt.cs.columbia.edu/">[Website]</a> </p><br>
<p> Our work introduces a more realistic problem setting for lab-in-the-loop design optimization, where an experiment returns high-dimensional "auxiliary" information beyond a scalar reward. We develop a novel method tailored to this setting and demonstrate that it significantly accelerates design optimization across different domains, such as robot hardware design.</p>
</div>
</div>
<br>
<div class="research">
<div class="research-img">
<img style="height:190px; width:200px" src="../imgs/coralgif3.gif">
</div>
<div class="research-desc">
<p style="font-size:18px;font-weight:1000"><a href="https://surfsup.cs.columbia.edu/">SurfsUp: Learning Fluid Simulation for Novel Surfaces</a></p>
<p><span class="extra-bold">Arjun Mani<sup>*</sup></span>, <a href="https://www.cs.columbia.edu/~ipc2107/">Ishaan Preetam Chandratreya<sup>*</sup></a>, <a href="https://www.cs.toronto.edu/~creager/">Elliot Creager</a>, <a href="https://www.cs.columbia.edu/~vondrick/">Carl Vondrick</a>, <a href="https://www.cs.columbia.edu/~zemel/">Richard Zemel</a></p>
<p>ICCV 2023.</p>
<p> <a href="https://arxiv.org/abs/2304.06197">[arXiv]</a> / <a href="https://surfsup.cs.columbia.edu/">[Website]</a> </p><br>
<p> We introduce a novel approach for ML-based fluid simulation. While learned GNN models for particle-based simulation struggle to scale to large scenes, our method addresses this limitation by modeling solid surfaces using implicit 3D representations. This approach enables more scalable and accurate simulation of fluid–surface interactions, as well as inverse design of solid surfaces.
</p>
</div>
</div>
<br>
<div class="research">
<div class="research-img">
<img style="height:190px; width:200px" src="../imgs/pointing3.png">
</div>
<div class="research-desc">
<p style="font-size:18px;font-weight:1000"><a href="https://arxiv.org/abs/2011.13681">Point and Ask: Incorporating Pointing into Visual Question Answering</a></p>
<p><span class="extra-bold">Arjun Mani</span>, <a href="https://scholar.google.com/citations?user=ue2n778AAAAJ&hl=en">Will Hinthorn</a>, <a href="https://ai4all.princeton.edu/people/nobline-yoo/">Nobline Yoo</a>, <a href="https://www.cs.princeton.edu/~olgarus/">Olga Russakovsky</a></p>
<p>VQA Workshop, CVPR 2021 (<b>Poster Spotlight</b>). </p>
<p><a href="https://arxiv.org/abs/2011.13681">[arXiv]</a> / <a href="https://github.com/princetonvisualai/pointingqa">[Website]</a></p><br>
<p>We extend Visual Question Answering (VQA) to grounded questions involving pointing gestures and introduce benchmark datasets and model designs for this new question space.</p>
</div>
</div>
<div class="research">
<div class="research-img" style="padding-right:55px;">
<img style="height:170px" src="../imgs/galgebra2.png">
</div>
<div class="research-desc">
<p style="font-size:18px;font-weight:1000"><a href="https://www.pacm.princeton.edu/sites/default/files/pacm_arjunmani_0.pdf">Representing Words in a Geometric Algebra</a></p>
<p><span class="extra-bold">Arjun Mani</span>, <a href="https://www.cs.princeton.edu/~rpa/">Ryan P. Adams</a></p>
<p>Best Overall Project, Princeton Program in Applied Mathematics (PACM)</p>
<p> <a href="https://www.pacm.princeton.edu/sites/default/files/pacm_arjunmani_0.pdf">[Report]</a> </p>
<p>We propose using geometric algebra to embed words as multivectors instead of standard vectors, and demonstrate greater expressivity in word similarity and analogy-solving with this representation.</p>
</div>
</div>
<div class="research">
<div class="research-img" style="padding-right:55px;">
<img style="height:180px;width:190px;" src="../imgs/ieee2.png">
</div>
<div class="research-desc">
<p style="font-size:17px; font-weight:1000"><a href="https://ieeexplore.ieee.org/document/7742396">Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis</a></p>
<p><a href="https://scholar.google.com/citations?user=TXsC5ZkAAAAJ&hl=en">Assaf Hoogi</a>, <span class="extra-bold">Arjun Subramaniam<sup>*</sup></span>, <a href="https://sites.google.com/view/rishiv/">Rishi Veerapaneni<sup>*</sup></a>, <a href="https://rubinlab.stanford.edu/">Daniel Rubin</a></p>
<p>IEEE Transactions on Medical Imaging, vol. 36, no. 3, March 2017.</p>
<p> <a href="./pdfs/TMI.2016.2628084.pdf">[Paper]</a> / <a href="https://ieeexplore.ieee.org/document/7742396">[IEEE page]</a></p><br>
<p>We develop a deep learning-aided approach for medical image segmentation, which achieves significant improvement compared to previous state-of-the-art methods on MRI and CT lesion datasets.</p>
</div>
</div>
<h1 id="Teaching" style="color:black;padding-top:5px;font-size:28px;">Teaching and Service</h1>
<p>TA for "Frontiers of Machine Learning" at Columbia University, Prof. Carl Vondrick (Fall 2025). <br> Seminar class covering frontier research areas in ML (AI agents, post-training, diffusion models, VLAs, etc.).</p>
<p>TA for "Neural Networks and Deep Learning" at Columbia University, Prof. Richard Zemel (Spring 2025). <br> Graduate-level class covering fundamental principles and advanced topics in deep learning.</p>
<p>Reviewer for ICML, ICLR, CVPR.</p>
<!-- <h1 id="Other" style="color:black;">Other</h1>
<p> Check out my <a href="/projects">Projects</a> page for various projects that I have worked or am working on, and my <a href="/music">music</a> page for links to past and upcoming performances. I will be starting up a <a href="/blog">blog</a> soon as well, so keep an eye out for that! </p>
<p>Lastly, this website template owes thanks to <a href="https://sxzhang25.github.io/">Sharon Zhang</a>.</p> -->
<br/>
<p>This website template owes thanks to <a href="https://sxzhang25.github.io/">Sharon Zhang</a>.</p>
<!--
<div class="divTableRow">
<div class="divTableCell">
</div>
<div class="divTableCell">
<h2 id="Projects">Projects</h2>
</div>
</div>
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
<img class="about" src="/imgs/orf569.png" width="100%">
</div>
<div class="divTableCell">
<h3>Expressivity and Data-Dependency of Pruned Networks</h3>
<p>ORF 569 Theory of Deep Learning, Fall 2021</p>
<p>Article coming soon!</p>
<p>Using random label experiments, we examine the extent to which pruning methods at initialization are data-dependent. We also examine the
expressivity of these networks (e.g. their ability to fit true vs. random labels) and analyze their characteristics compared to networks pruned
after training (e.g. using the lottery ticket hypothesis). </p>
</div>
</div>
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
<img class="about" src="/imgs/cos521.png" width="100%">
</div>
<div class="divTableCell">
<h3>Majority Dynamics and Information Aggregation in Networks</h3>
<p>COS 521 Advanced Algorithms, Fall 2020</p>
<p><a href="./pdfs/COS_521_Final_Project_Updated.pdf">Final Paper</a></p>
<p>We provide theory showing that certain graphs will never converge to a correct opinion even if each node is initially biased towards it;
we also empirically examine majority dynamics and the effects of seeding or higher thresholds for changing opinions. </p>
</div>
</div>
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
<img class="about" src="/imgs/cos529.png" width="100%">
</div>
<div class="divTableCell">
<h3>Examining Ambiguity in Human Pointing with Computer Vision</h3>
<p>COS 529 Advanced Computer Vision, Spring 2019</p>
<p><a href="./pdfs/COS529_FinalProject_ArjunMani.pdf">Final Paper</a></p>
<p> I study how to understand pointing gestures that have ambiguous intent, examining how ambiguity can be
explicitly predicted by semantic segmentation models. </p>
</div>
</div>
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
</div>
<div class="divTableCell">
<h2 id="Dev">Dev</h2>
</div>
</div>
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
<img class="about" src="/imgs/aidan.png" width="100%">
</div>
<div class="divTableCell">
<h3>AIDAN: Automated ML and Data Analysis with Voice Commands</h3>
<p><b>Best Overall, HackPrinceton Spring 2018</b></p>
<p><a href="https://devpost.com/software/a-i-d-a-n-ai-to-analyze-your-data-with-your-voice">Devpost Link</a><br/><a href="https://github.com/sragavan99/AIDAN">Code</a></p>
<p>We built a chatbot that responds to typed or spoken user commands in real time and performs linear regression, machine learning (SVM, logistic regression),
and basic data analysis (mean, mode, etc.). The user only needs to upload a CSV file of the dataset. </p>
</div>
</div>
-->
<!--
<div class="divTableRow">
<div class="divTableCell">
<img class="about" src="/imgs/deepsquat.png" width="100%">
</div>
<div class="divTableCell">
<h3>DeepSquat: Deep Learning to Assess Exercise Technique</h3>
<p><b>Best Health/Fitness Hack, HackPrinceton Fall 2017</b></p>
<p><a href="https://devpost.com/software/deepsquat-deep-learning-tells-you-how-to-squat">Devpost Link</a><br/><a href="https://github.com/arjun-mani/DeepSquat">Code</a></p>
<p>We built an app to assess squat technique, using a deep-learning based pose detection model. </p>
</div>
</div>
-->