_bibliography/papers.bib (14 additions, 3 deletions)
@@ -3,7 +3,7 @@
 
 @string{aps = {American Physical Society,}}
 
-@article{shah2026mimicdroid,
+@inproceedings{shah2026mimicdroid,
   title={MimicDroid: In-Context Learning for Humanoid Robot Manipulation from Human Play Videos},
   author={Shah, Rutav and Liu, Shuijing and Wang, Qi and Jiang, Zhenyu and Kumar, Sateesh and Seo, Mingyo and Mart{\'\i}n-Mart{\'\i}n, Roberto and Zhu, Yuke},
   booktitle={International Conference on Robotics and Automation (ICRA)},
@@ -15,7 +15,7 @@ @article{shah2026mimicdroid
   bibtex_show={true},
 }
 
-@article{mandikal2025mash,
+@inproceedings{mandikal2025mash,
   title={Mash, Spread, Slice! Learning to Manipulate Object States via Visual Spatial Progress},
   author={Mandikal, Priyanka and Hu, Jiaheng and Dass, Shivin and Majumder, Sagnik and Mart{\'i}n-Mart{\'i}n*, Roberto and Grauman*, Kristen},
   booktitle={International Conference on Robotics and Automation (ICRA)},
@@ -27,7 +27,7 @@ @article{mandikal2025mash
   bibtex_show={true},
 }
 
-@article{li2026momagen,
+@inproceedings{li2026momagen,
   title={MoMaGen: Generating Demonstrations under Soft and Hard Constraints for Multi-Step Bimanual Mobile Manipulation},
   author={Li*, Chengshu and Xu*, Mengdi and Bahety*, Arpit and Yin*, Hang and Jiang, Yunfan and Huang, Huang and Wong, Josiah and Garlanka, Sujay and Gokmen, Cem and Zhang, Ruohan and Liu, Weiyu and Wu, Jiajun and Mart{\'\i}n-Mart{\'\i}n, Roberto and Li, Fei-Fei},
   booktitle={International Conference on Learning Representations (ICLR)},
@@ -39,6 +39,17 @@ @article{li2026momagen
   bibtex_show={true},
 }
 
+@inproceedings{dass2026datamil,
+  title={DataMIL: Selecting Data for Robot Imitation Learning with Datamodels},
+  author={Dass*, Shivin and Khaddaj*, Alaa and Engstrom, Logan and Madry, Aleksander and Ilyas, Andrew and Mart{\'\i}n-Mart{\'\i}n, Roberto},
+  booktitle={International Conference on Learning Representations (ICLR)},
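The edits above all make the same mechanical change: retagging conference papers from `@article` to `@inproceedings` so the BibTeX entry type matches the `booktitle` field (`@article` expects a `journal` field instead). As a sketch, a complete entry of this shape might look like the following; the citation key, title, authors, and `year` here are placeholders for illustration, not taken from this diff:

```bibtex
@inproceedings{doe2026example,
  title={An Example Paper Title},
  author={Doe, Jane and Roe, Richard},
  booktitle={International Conference on Robotics and Automation (ICRA)},
  year={2026},
  bibtex_show={true},
}
```

`bibtex_show` is not standard BibTeX; it is a theme-specific flag (here presumably al-folio's) that controls whether a "bib" button is rendered on the publications page.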
_pages/about.md (12 additions, 7 deletions)
@@ -8,22 +8,27 @@ profile:
   align: right
   image: lab_picture.jpg
   image_circular: false # crops the image to make it circular
-  address: >
-    <p>2317 Speedway</p>
-    <p>Austin, Texas 78712</p>
+  image_full_width: true # span image across all columns (like content below)
+  # address: >
+  #   <p>2317 Speedway</p>
+  #   <p>Austin, Texas 78712</p>
 
 news: true # includes a list of news items
 selected_papers: false # includes a list of papers marked as "selected={true}"
 social: true # includes social icons at the bottom of the page
 ---
 
-RobIn Lab is directed by [Roberto Martín-Martín](https://robertomartinmartin.com/), and is part of the University of Texas at Austin, Department of Computer Science, UTAI Lab and Texas Robotics.
+RobIn Lab, directed by [Roberto Martín-Martín](https://robertomartinmartin.com/), is part of the University of Texas at Austin’s Department of Computer Science and affiliated with [Texas Robotics](https://robotics.utexas.edu/).
 
-The research in RobIn explores at the intersection of <u>robotics</u>, <u>computer vision</u> and <u>machine learning</u>, with an emphasis on creating robotic solutions that accomplish <u>interaction-rich</u> tasks in the real world. Our research revolves around the question: what are the mechanisms that enable *physically intelligent behavior* in embodied agents? Our hypothesis is that intelligence is not a purely computational process, but an interplay between perceiving, reasoning, learning and <u>interacting physically</u> with the world.
+RobIn Lab conducts research at the intersection of robotics, computer vision, and machine learning, with a focus on enabling robots to perform interaction-rich tasks in the real world. We study the mechanisms underlying physically intelligent behavior in embodied agents, guided by the hypothesis that intelligence emerges from the tight coupling of perception, reasoning, learning, and physical interaction.
 
-Keywords in our research include: learning (reinforcement learning, imitation learning, representation learning), planning (motion planning, task planning, TAMP), control (optimal control, force control), perception (active, interactive, multimodal perception), 2D and 3D reasoning, and Bayesian state estimation, sim2real. Application domains we find exciting involve: mobile manipulation, stationary manipulation, contact-rich manipulation, social and visual navigation.
+Our research spans learning (reinforcement, imitation, and representation learning), planning (motion planning, task planning, TAMP), control (optimal and force control), perception (active, interactive, and multimodal), 2D/3D reasoning, and sim-to-real transfer. We apply these ideas to domains such as mobile and stationary manipulation, contact-rich manipulation, and social and visual navigation.
 
-We are always looking for outstanding new members and collaborators for RobIn. Please, fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSffvYGQ74fz2c-GvBfTGbuXGxupA0Y8Iy4s88UfVu7Gfb1c1A/viewform) and state clearly if you are an UG/MSc/PhD from UT, or a prospective visitor.
+<!--The research in RobIn explores at the intersection of <u>robotics</u>, <u>computer vision</u> and <u>machine learning</u>, with an emphasis on creating robotic solutions that accomplish <u>interaction-rich</u> tasks in the real world. Our research revolves around the question: what are the mechanisms that enable *physically intelligent behavior* in embodied agents? Our hypothesis is that intelligence is not a purely computational process, but an interplay between perceiving, reasoning, learning and <u>interacting physically</u> with the world.
+
+Keywords in our research include: learning (reinforcement learning, imitation learning, representation learning), planning (motion planning, task planning, TAMP), control (optimal control, force control), perception (active, interactive, multimodal perception), 2D and 3D reasoning, and Bayesian state estimation, sim2real. Application domains we find exciting involve: mobile manipulation, stationary manipulation, contact-rich manipulation, social and visual navigation. -->
+
+<!-- We are always looking for outstanding new members and collaborators for RobIn. Please, fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSffvYGQ74fz2c-GvBfTGbuXGxupA0Y8Iy4s88UfVu7Gfb1c1A/viewform) and state clearly if you are an UG/MSc/PhD from UT, or a prospective visitor. -->
 
 [//]: # Put your address / P.O. box / other info right below your picture. You can also disable any these elements by editing `profile` property of the YAML header of your `_pages/about.md`. Edit `_bibliography/papers.bib` and Jekyll will render your [publications page](/al-folio/publications/) automatically.
 Thank you for your interest in working with us! We are always looking for outstanding new members and collaborators for RobIn. Please, fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSffvYGQ74fz2c-GvBfTGbuXGxupA0Y8Iy4s88UfVu7Gfb1c1A/viewform) and state clearly if you are an UG/MSc/PhD from UT, or a prospective visitor. We actively monitor this form.
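Assembling the front-matter hunk, the `profile` block in `_pages/about.md` after this change would read roughly as follows. This is a sketch reconstructed from the visible diff lines; the two-space indentation and the nesting of the first keys under `profile:` are assumptions based on the hunk's `profile:` context line and typical al-folio layout, not shown verbatim in the diff:

```yaml
profile:
  align: right
  image: lab_picture.jpg
  image_circular: false # crops the image to make it circular
  image_full_width: true # span image across all columns (like content below)
  # address: >
  #   <p>2317 Speedway</p>
  #   <p>Austin, Texas 78712</p>

news: true # includes a list of news items
selected_papers: false # includes a list of papers marked as "selected={true}"
social: true # includes social icons at the bottom of the page
```

Commenting out `address` rather than deleting it keeps the street address easy to restore later, while `image_full_width: true` lets the lab picture span the content width.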