Mrinal Verghese

I am a PhD student at the Robotics Institute at Carnegie Mellon University, advised by Professor Chris Atkeson. I'm interested in making useful household robots that can learn efficiently in the real world. I am currently interning with an embodied AI team at Meta Reality Labs Research, working with Ruta Desai on grounding large language models for task planning, and I will return to my PhD in November.

I received my B.S. in Mathematics-Computer Science from UC San Diego where I worked with Professor Michael Yip. I was a TA for Supervised Machine Learning (Cogs 118A) and Deep Learning and Neural Networks (Cogs 181) with Professor Zhuowen Tu. I've also worked with Brain Corp as a research intern.

Email  /  CV  /  Google Scholar  /  Twitter  /  Github

profile photo

My goal is to accelerate robot learning to build useful robot companions that can learn on the job. I'm pursuing this with two main approaches: data-efficient learning that leverages memory and retrieval-based methods, and robot reasoning using pretrained models such as large language and vision models. I believe memory-based and other non-parametric learning methods can quickly learn skills and models, transfer them to new scenarios, and handle growing amounts of data in an online continual-learning setting. Leveraging models pretrained on internet-scale data can give robots high-level reasoning capabilities without the expense of collecting large amounts of robot-specific data. Combined, I believe these two approaches may enable real-world learning agents and effective robot partners. I like validating my approaches on cooking tasks: they are useful, have good existing datasets, and require a diverse set of skills that cannot be explicitly programmed.

Current Projects

Grounding LLMs for Robotics

Large language models have shown promising success in enabling robot reasoning in human environments. However, these models often receive information only from a prompt and a provided set of skills or primitives. I want to investigate how to ground these models with real-world feedback, and how to do so efficiently. I believe solving this problem will greatly improve the performance of LLMs for reasoning in robotics.

Retrieval-Based Policy Learning

Memory-based and retrieval-based learning methods have important advantages for policy learning. New data can be incorporated into the model by simply adding it to the robot's memory, and there are no issues with distribution shift in the training data. In addition, these methods require minimal training and offer rapid online adaptation. In this work I'm looking at learning diverse policies from a mix of offline videos and robot-collected experience.
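As a toy sketch of the retrieval idea (hypothetical names, not code from this project): experience is stored as (state, action) pairs, and the robot acts by looking up the action of the nearest stored state. Adding data is just appending to memory.

```python
import numpy as np

class RetrievalPolicy:
    """Minimal nearest-neighbor policy sketch (illustration only):
    store (state, action) pairs and act by retrieving the action
    of the closest stored state."""

    def __init__(self):
        self.states = []   # stored state vectors
        self.actions = []  # corresponding actions

    def add_experience(self, state, action):
        # New data is incorporated simply by appending it to memory --
        # no retraining, and no distribution-shift concerns.
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(action)

    def act(self, state):
        state = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(state - s) for s in self.states]
        return self.actions[int(np.argmin(dists))]
```

In practice one would use a learned state embedding and an approximate nearest-neighbor index rather than a linear scan, but the memory-as-model structure is the same.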


Using Memory-Based Learning to Solve Tasks with State-Action Constraints
Mrinal Verghese and Christopher Atkeson
The International Conference on Robotics and Automation, 2023

Tasks with constraints or dependencies between states and actions, such as tasks involving locks or other mechanical blockages, have posed a significant challenge for reinforcement learning algorithms. The sequential nature of these tasks makes obtaining final rewards difficult, and transferring information between task variants using continuous learned values such as weights rather than discrete symbols can be inefficient. In this work we propose a memory-based learning solution that leverages the symbolic nature of the constraints and temporal ordering of actions in these tasks to quickly acquire and transfer high-level information about the task constraints. We evaluate the performance of memory-based learning on both real and simulated tasks with discontinuous constraints between states and actions, and show our method learns to solve these tasks an order of magnitude faster than both model-based and model-free deep reinforcement learning methods.
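To give the flavor of how symbolic ordering information can be stored and transferred (a hypothetical toy, not the paper's implementation): a memory of action orderings observed in successful episodes can be reused to rule out candidate plans that contradict remembered constraints.

```python
class ConstraintMemory:
    """Toy illustration: remember which action preceded which in
    successful episodes, and reject candidate plans that contradict
    those remembered orderings. Not the paper's actual method."""

    def __init__(self):
        self.precedes = set()  # (a, b): a was seen before b in a success

    def record_success(self, action_sequence):
        # Store every pairwise ordering from a successful episode.
        for i, a in enumerate(action_sequence):
            for b in action_sequence[i + 1:]:
                self.precedes.add((a, b))

    def is_consistent(self, candidate_sequence):
        # A candidate is inconsistent if it reverses a remembered ordering.
        for i, a in enumerate(candidate_sequence):
            for b in candidate_sequence[i + 1:]:
                if (b, a) in self.precedes:
                    return False
        return True
```

Because the memory is discrete and symbolic, it transfers directly to task variants that share actions, rather than requiring the slow transfer of continuous learned weights.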

Here is a video about this work

Configuration Space Decomposition for Scalable Proxy Collision Checking in Robot Planning and Control
Mrinal Verghese, Nikhil Das, Yuheng Zhi, Michael Yip
Robotics and Automation Letters, 2022

Real-time robot motion planning in complex high-dimensional environments remains an open problem. Motion planning algorithms, and their underlying collision checkers, are crucial to any robot control stack. Collision checking takes up a large portion of the computational time in robot motion planning. Existing collision checkers make trade-offs between speed and accuracy and scale poorly to high-dimensional, complex environments. We present a novel space decomposition method using K-Means clustering in the Forward Kinematics space to accelerate proxy collision checking. We train individual configuration space models using Fastron, a kernel perceptron algorithm, on these decomposed subspaces, yielding compact yet highly accurate models that can be queried rapidly and scale better to more complex environments. We demonstrate this new method, called Decomposed Fast Perceptron (D-Fastron), on the 7-DOF Baxter robot producing on average 29x faster collision checks and up to 9.8x faster motion planning compared to state-of-the-art geometric collision checkers.
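A highly simplified sketch of the decomposition idea (hypothetical names, with a toy linear perceptron standing in for Fastron's kernel perceptron): cluster sampled configurations by their forward-kinematics positions, train one small proxy model per cluster, and route each collision query to its nearest cluster's model.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: returns final centroids and point-to-cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(points[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    # Recompute assignments against the final centroids.
    labels = np.linalg.norm(points[:, None] - centroids[None], axis=2).argmin(axis=1)
    return centroids, labels

class Perceptron:
    """Toy linear classifier standing in for Fastron's kernel perceptron."""
    def __init__(self, dim, epochs=50):
        self.w = np.zeros(dim + 1)  # last entry acts as a bias weight
        self.epochs = epochs

    def fit(self, X, y):  # y in {-1, +1}: in-collision / collision-free
        Xb = np.hstack([X, np.ones((len(X), 1))])
        for _ in range(self.epochs):
            for xi, yi in zip(Xb, y):
                if yi * (self.w @ xi) <= 0:
                    self.w += yi * xi

    def predict(self, q):
        return 1 if self.w @ np.append(q, 1.0) > 0 else -1

class DecomposedChecker:
    """Cluster configurations by their forward-kinematics positions,
    then train one lightweight proxy model per cluster."""
    def __init__(self, configs, labels, fk, k=4):
        self.fk = fk
        fk_pts = np.array([fk(q) for q in configs])
        self.centroids, assign = kmeans(fk_pts, k)
        self.models = []
        for j in range(k):
            m = Perceptron(configs.shape[1])
            m.fit(configs[assign == j], labels[assign == j])
            self.models.append(m)

    def check(self, q):
        # Route the query to its nearest cluster and ask only that model.
        j = np.linalg.norm(self.fk(q) - self.centroids, axis=1).argmin()
        return self.models[j].predict(q)
```

The payoff is that each per-cluster model is compact, so queries touch only a small model instead of one monolithic classifier over the whole configuration space.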

Model-free Visual Control for Continuum Robot Manipulators via Orientation Adaptation
Mrinal Verghese, Florian Richter, Aaron Gunn, Phil Weissbrod, Michael Yip
The International Symposium on Robotics Research, 2019

We present an orientation-adaptive controller to compensate for the effects of highly constrained environments on continuum manipulator actuation. A transformation matrix, updated with optimal estimation techniques from optical flow measurements captured by the distal camera, is composed with any Jacobian estimate or kinematic model to compensate for these effects.
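A minimal sketch of the composition step (hypothetical names and simplifications; the optical-flow estimator that produces the correction matrix is omitted): the estimated orientation correction is composed with the positional Jacobian before solving for joint velocities.

```python
import numpy as np

def resolved_rate_step(R_est, J, v_desired, damping=1e-2):
    """Compose an estimated orientation correction R_est (3x3 rotation)
    with a positional Jacobian J (3 x n), then solve a damped
    least-squares inverse for joint velocities. Illustrative sketch
    only; not the paper's exact formulation."""
    Jc = R_est @ J                       # orientation-corrected Jacobian
    A = Jc @ Jc.T + damping * np.eye(3)  # damped Gram matrix
    return Jc.T @ np.linalg.solve(A, v_desired)
```

With the identity correction this reduces to ordinary damped least-squares control; a nonidentity correction rotates commanded task-space velocities into the frame the constrained environment actually induces.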

16-890: Robot Cognition for Manipulation
Spring 2023 (Co-Teaching)

This seminar course will cover a mixture of modern and classical methods for robot cognition. We will review papers related to task planning and control using both symbolic and numeric methods. The goal of this course is to give students an overview of the current state of research on robot cognition.

16-745: Optimal Control and Reinforcement Learning
Spring 2023 (Teaching Assistant)

This is a course about how to make robots move through and interact with their environment with speed, efficiency, and robustness. We will survey a broad range of topics from nonlinear dynamics, linear systems theory, classical optimal control, numerical optimization, state estimation, system identification, and reinforcement learning. The goal is to provide students with hands-on experience applying each of these ideas to a variety of robotic systems so that they can use them in their own research.

16-264: Humanoids
Spring 2022 (Teaching Assistant)

This course surveys perception, cognition, and movement in humans, humanoid robots, and humanoid graphical characters. Application areas include more human-like robots, video game characters, and interactive movie characters.

Cogs 181: Deep Learning and Neural Networks
Spring 2020 (Teaching Assistant)

This course will cover the basics of neural networks, as well as recent developments in deep learning including deep belief nets, convolutional neural networks, recurrent neural networks, long short-term memory, and reinforcement learning. We will study details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification.

Cogs 118A: Supervised Machine Learning
Winter 2020 (Teaching Assistant)

This course introduces the mathematical formulations and algorithmic implementations of the core supervised machine learning methods. Topics in 118A include regression, nearest neighbors, decision trees, support vector machines, and ensemble classifiers.