Site in Progress

I am a Master's student at the Robotics Institute, School of Computer Science, Carnegie Mellon University, working with Prof. Howie Choset and Prof. Matthew Travers. I work on deep learning to optimize autonomous exploration and navigation with limited perception in unknown static and dynamic environments.

My interests lie in the research and development of intelligent systems and integrated robotics technologies that improve human life: broadly machine learning, computer vision, planning, and systems engineering, with a particular focus on perception, deep learning, and decision-making.

Highlights

Experience

Boeing Blaser Project | Carnegie Mellon University & Boeing Collaboration Graduate Research Assistant | Howie Choset May 2019 – Present abstract | webpage | slides
Designing visual-inertial odometry (VIO) algorithms to run onboard an i.MX RT platform, toward a standalone miniature, ultra-short-range, high-accuracy laser-camera sensor module for confined-space applications.
Developed a unified calibration procedure that simultaneously optimizes 9 parameters spanning camera intrinsics, laser-to-camera calibration, and hand-eye calibration in a single non-linear solve.
Redesigned the algorithm to cut calibration run-time by 5x, using fewer than 30 images at 640x480 and 1280x960 camera resolutions.
Software: C/C++, Python, Matlab, ROS
Hardware: Blaser Sensor, UR5e
Key Concepts: camera calibration, hand-eye calibration, camera-laser calibration, visual-inertial odometry, short-range high-precision perception
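The joint calibration above amounts to one non-linear least-squares problem over all parameters at once. The sketch below is a hypothetical toy version, not the actual procedure: it fits only a 5-parameter pinhole-plus-distortion camera model to synthetic correspondences, but shows the Gauss-Newton structure that such a joint solve takes (the real solve stacks camera, laser-plane, and hand-eye residuals into the same loop).

```python
import numpy as np

# Hypothetical synthetic setup: 3D points in the camera frame and a
# "true" camera (fx, fy, cx, cy, k1) that generates the observations.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-0.1, -0.1, 0.3], [0.1, 0.1, 0.6], (60, 3))
theta_true = np.array([600.0, 600.0, 320.0, 240.0, -0.2])

def project(theta, pts):
    fx, fy, cx, cy, k1 = theta
    x, y = pts[:, 0] / pts[:, 2], pts[:, 1] / pts[:, 2]
    d = 1.0 + k1 * (x * x + y * y)          # radial distortion term
    return np.stack([fx * x * d + cx, fy * y * d + cy], 1)

obs = project(theta_true, pts3d)            # noiseless observations

def residuals(theta):
    # reprojection error, stacked into one residual vector
    return (project(theta, pts3d) - obs).ravel()

def gauss_newton(theta, iters=20, eps=1e-6):
    for _ in range(iters):
        r = residuals(theta)
        # numerical Jacobian: one forward-difference column per parameter
        J = np.stack([(residuals(theta + eps * np.eye(len(theta))[j]) - r) / eps
                      for j in range(len(theta))], 1)
        theta = theta + np.linalg.lstsq(J, -r, rcond=None)[0]
    return theta

theta_fit = gauss_newton(np.array([500.0, 500.0, 300.0, 220.0, 0.0]))
```

On noiseless synthetic data the loop recovers the generating parameters from a perturbed initial guess; joint optimization avoids the error accumulation of calibrating each subsystem separately.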


Boeing Sealant Application | Carnegie Mellon University & Boeing Collaboration Motion Software Lead | Howie Choset May 2017 – May 2019 abstract | webpage | slides
Developed and implemented the closed-loop, high-precision motion control infrastructure and software, with computer-vision feedback, for the Boeing APADA Arm, automating sealant application and inspection on sub-millimeter cracks in aircraft wing bays. The solution reduces human error and improves worker health and safety, as the sealant is noxious.
Developed multiple robot-arm trajectories, combining cone and spiral patterns, to maximize the scanning coverage of the mounted sensor and the nozzle during sealant application.
Implemented finger tracking and following with a short-range, high-precision custom laser-camera sensor mounted on a UR5e, demonstrating high-precision closed-loop control of the sensor and arm for confined-space perception and manipulation.
Software: C/C++, ROS
Hardware: Custom 5DOF Arm, UR5e, Blaser sensor
Key Concepts: kinematics, trajectory generation, motion planning, obstacle navigation, short-range computer vision


Amazon Robotics | Advanced Robotics Perception Team Advanced Robotics Intern | Chuck Lloyd May – Aug 2018 abstract | media
Developed a computer-vision-based solution to the multifaceted problem of failed picks in the pick-and-place pipeline that stows items arriving at Amazon warehouses. I investigated and analyzed the existing pipeline to pinpoint the root cause: incorrectly perceived 3D geometry caused by “holes” of missing data. These occurred due to occlusion, reflectance, and the scanning angle of the static multi-sensor workspace rig. My solution pipeline addressed this problem by:
• Identifying and locating the holes through angle criterion
• Validating whether missing data corresponds to a sensing gap or a physical hole
• Prioritizing filling order and executing re-planned picking path for scanning
The solution implemented is a hybrid pipeline using a sensor mounted on the end-effector of the robot to fill in data after detecting and confirming the existence of an area with missing data from image and point cloud space.
Replicated the static multi-sensor camera rig and the environmental constraints of the as-is workstation; designed and 3D-printed an end-of-arm sensor mount. Demonstrated the integrated modular pipeline on the physical system.
Also developed an alternate solution using classical PCL surface-reconstruction techniques, including triangulation, Poisson reconstruction, and voxel dilation; pitched a further approach that fits shape primitives to approximate object shape and uses machine-learning models to predict object geometry.
Software: C/C++, ROS
Hardware: Intel RealSense D410, Ensenso, UR10
Key Concepts: sensor-fusion, motion planning, computer vision, point cloud to image reprojection
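The angle-criterion step above works on each point's local neighborhood: collect the directions to nearby points, sort them by angle, and flag the point as a boundary if there is a large angular gap (no neighbors on one side). A minimal 2-D sketch on hypothetical toy data, a planar grid with a missing patch standing in for a hole:

```python
import numpy as np

def is_boundary(point, neighbors, gap_thresh=np.pi / 2):
    """Angle criterion: sort directions to neighbors; a gap wider than
    gap_thresh means the point sits on the edge of missing data."""
    if len(neighbors) == 0:
        return True
    d = neighbors - point
    ang = np.sort(np.arctan2(d[:, 1], d[:, 0]))
    gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
    return gaps.max() > gap_thresh

def boundary_mask(pts, radius=1.5):
    flags = []
    for p in pts:
        dist = np.linalg.norm(pts - p, axis=1)
        nb = pts[(dist > 0) & (dist <= radius)]
        flags.append(is_boundary(p, nb))
    return np.array(flags)

# Toy planar cloud: a 10x10 grid with a square patch of missing points.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
hole = (pts[:, 0] > 3) & (pts[:, 0] < 7) & (pts[:, 1] > 3) & (pts[:, 1] < 7)
pts = pts[~hole]

# Points bordering the missing patch get flagged; interior points do not.
# (The outer rim of the grid is also flagged, which is the expected
# behavior of the criterion on a finite cloud.)
mask = boundary_mask(pts)
```

The 3-D version projects each neighborhood onto the point's tangent plane first; the flagged boundary loops then drive the validation and refill-prioritization steps.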


FOXCONN IMR Fast-Vision | Carnegie Mellon University & FOXCONN Research Assistant May – Aug 2017 abstract | slides | media
Developed an in-lab, low-cost optical-flow sensor, containing two high-fps cameras and an IMU, for high-speed localization of FOXCONN modular robots in dense industrial environments, with serial communication and dynamic parameterization through ROS. Analyzed and selected suitable off-the-shelf OEM barcode scanners as the basis for the sensor.
Software: Arduino, C/C++, ROS
Hardware: IMU, Monocular Cameras
Key Concepts: optical-flow, sensor-fusion, localization
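The optical-flow core such a sensor relies on can be sketched in its simplest form: a single global Lucas-Kanade translation estimate between two frames. This is a hypothetical toy on a synthetic frame pair, not the sensor's actual pipeline (which runs at high frame rate on two cameras and fuses the IMU):

```python
import numpy as np

def lk_translation(prev, curr):
    """One-shot Lucas-Kanade: solve Ix*dx + Iy*dy = -It in least squares
    over the whole frame, for a single global pixel translation."""
    Iy, Ix = np.gradient(prev.astype(float))   # gradients along rows, cols
    It = curr.astype(float) - prev.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], 1)
    dx, dy = np.linalg.lstsq(A, -It.ravel(), rcond=None)[0]
    return dx, dy

# Smooth synthetic frame: a Gaussian blob, shifted one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
prev = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 6.0 ** 2))
curr = np.roll(prev, 1, axis=1)
flow = lk_translation(prev, curr)              # roughly (1.0, 0.0) pixels
```

Integrating such per-frame displacements over time, corrected by the IMU, is what turns the flow estimate into a localization signal.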


Summer Undergraduate Research Fellowship Research Assistant | Howie Choset May – Aug 2017 abstract | media
Awarded the Summer Undergraduate Research Fellowship 2017 for “Stable Stair Climbing for a Hexapod using Onboard Sensing” under Prof. Howie Choset and Dr. Guillaume. Researched and developed a gait for a hexapod robot, based on Central Pattern Generators, that is parameterized by obstacle dimensions extracted from camera feedback, enabling an intuitive, seamless walk-to-climb transition that increases mobility, particularly for the introduction of legged companion robots in homes.
Awarded the Boeing Blue Skies Award: GameChanger at the Meeting of the Minds Symposium 2018.
Software: Python, ROS
Hardware: Hexapod, Kinect
Key Concepts: central pattern generators, computer vision, dynamic motion planning
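A Central Pattern Generator can be sketched as a set of coupled phase oscillators whose couplings pull the legs toward desired phase offsets. The example below is a minimal, hypothetical version (alternating-tripod offsets, made-up gains and rates, no camera parameterization), just to show the mechanism:

```python
import numpy as np

def cpg_tripod(n_legs=6, k=4.0, omega=2 * np.pi, dt=0.005, steps=2000):
    """Coupled phase oscillators: each leg pair is pulled toward a desired
    phase offset, here an alternating-tripod pattern (two groups of three
    legs, half a cycle apart)."""
    rng = np.random.default_rng(2)
    phi = rng.uniform(0, 2 * np.pi, n_legs)      # random initial phases
    target = np.pi * (np.arange(n_legs) % 2)     # desired group offsets
    for _ in range(steps):
        # coupling term drives phi_j - phi_i toward target_j - target_i
        dphi = omega + np.array([
            k * np.sin((phi - phi[i]) - (target - target[i])).sum()
            for i in range(n_legs)])
        phi = phi + dphi * dt
    return phi

phi = cpg_tripod()
# a joint command for leg i at this instant could be e.g. A * np.sin(phi[i])
```

The gait-parameterization idea from the project corresponds to modulating amplitudes, offsets, and frequency as functions of the obstacle dimensions extracted from vision, while the oscillator network keeps the legs coordinated through the transition.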


Gearless Omni-directional Acceleration-vectoring Topology Project Undergraduate Researcher | Matt Travers Dec 2016 – Jun 2017 abstract
Developed ROS teleoperation of a robotic omni-directional leg to control motion and jump over obstacles. Built the model and simulation for testing scenarios in the Gazebo physics simulator.
Software: Python, XML, ROS, Gazebo
Hardware: A Quasi-Direct Drive Legged Robot
Key Concepts: simulation, teleoperation


Series Elastic Actuator Robots Undergraduate Researcher | BioRobotics Lab Sep 2016 – Feb 2017 abstract
Investigated methods for freeing the Snake robot from a trapped position in uneven terrain and returning it toward its initial heading, using signal analysis on built-in torque sensor readings.
Verified our proposed heuristic for detecting the jammed state: repetitive signals in the sensor feedback across the robot links. As an interim fix, we implemented the low-cost solution of reducing speed, since the robot was very unlikely to become trapped at lower speeds.
Worked on optimizing the LED driver to house more components or to scale the Snake robot down for constrained environments.
Software: Python, Matlab, CircuitMaker
Hardware: Snake robot
Key Concepts: signal-analysis, fourier transforms, circuits
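The repetitive-signal heuristic can be sketched as a spectral-peak test on the torque trace: a jammed, oscillating gait concentrates energy at one frequency, while free locomotion looks broadband. A toy version on hypothetical synthetic signals (the real detector ran on per-link torque feedback):

```python
import numpy as np

def looks_jammed(torque, peak_ratio=5.0):
    """Flag a repetitive torque signature: a jammed gait shows a strong
    spectral peak relative to the rest of the spectrum."""
    spec = np.abs(np.fft.rfft(torque - np.mean(torque)))
    return spec.max() > peak_ratio * np.median(spec)

# Hypothetical traces at 100 Hz: a 2 Hz oscillation (stuck, slipping in
# place) versus broadband noise (normal interaction with terrain).
rng = np.random.default_rng(3)
t = np.arange(0, 5, 0.01)
jammed = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)
free = 0.5 * rng.normal(size=t.size)
```

The threshold and window length are the tunable parts; in practice they would be set from recorded jammed and free-running trials.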


Projects

18-578 Mechatronic Capstone Design: ShipBot Team: David Bang, Fiona Li, Sara Misra, Haowen Shi, Bo Tian Spring 2019 abstract | webpage | report | code
For this project we built a robot that autonomously manipulates a fixed set of electro-mechanical devices, such as different types of valves and breaker switches, given high-level commands. The project provides a proof-of-concept mechatronic device capable of operating existing human-operated equipment without modification, reducing the cost of retrofitting autonomous operation onto long-distance ships.
I designed, developed, and implemented the localization algorithm, the identification of the electromechanical device to manipulate, the kinematics of the mounted manipulator arm, and vision-based closed-loop motion planning. I was also responsible for sensor placement and the design of the hardware chassis frame, and contributed to the cyber-physical and software systems.
Software: C/C++, XML, ROS
Hardware: We built the robot!
Key Concepts: autonomy, computer vision, arm kinematics, motion-planning, localization


10-701 Semi-Supervised Learning for Classification Team: Wenyu Huang, Sara Misra, Junyan Pu Spring 2019 abstract | report | poster
Challenge: Forest Type Classification
We predict the forest cover type of an area given cartographical features of the area using a semi-supervised approach to deal with the lack of labeled data. We explore multiple methods of semi-supervised learning, particularly the Naive Bayes and Expectation Maximization algorithm, S3VM, and co-training using neural networks.
Software: Python
Key Concepts: expectation-maximization, S3VM, co-training, neural-networks
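The Naive Bayes + EM approach alternates between soft-labeling the unlabeled points (E-step) and refitting the class-conditional model with those soft labels plus the hard labels (M-step). A minimal 1-D Gaussian sketch on hypothetical synthetic data, standing in for the cartographical features:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: two classes, only four labeled points, 100 unlabeled.
x_lab = np.array([-2.0, -1.8, 2.1, 1.9])
y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Initialize the class-conditional model from the labeled data alone.
mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(25):
    # E-step: soft class responsibilities for the unlabeled points.
    p = np.stack([pi[k] * gauss(x_unl, mu[k], var[k]) for k in (0, 1)])
    r = p / p.sum(axis=0)
    # M-step: refit; labeled points keep hard 0/1 responsibilities.
    xs = np.concatenate([x_unl, x_lab])
    for k in (0, 1):
        w = np.concatenate([r[k], (y_lab == k).astype(float)])
        mu[k] = (w * xs).sum() / w.sum()
        var[k] = (w * (xs - mu[k]) ** 2).sum() / w.sum()
        pi[k] = w.sum() / len(xs)
```

The unlabeled points pull the class means toward the true cluster centers, which is the effect that makes the semi-supervised variant outperform training on the labels alone.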


16-720 Augmented Reality Team: Naman Gupta, Sara Misra, Olivia Xu Spring 2019 abstract | slides | code
The project goal was to overlay a 3D Wavefront OBJ model onto video recorded with an iPhone camera. The project is inspired by Pokemon Go and the idea of bringing augmented reality to physical Pokemon playing cards.
Software: Python
Hardware: iPhone, Pokemon playing cards
Key Concepts: classification, homography estimation, 3D-to-2D projection, ORB descriptors
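The overlay rests on estimating a planar homography between the card template and its detected position in each frame (from matched ORB features in the project), then projecting the model through it. A minimal Direct Linear Transform sketch on hypothetical correspondences; feature matching and rendering are omitted:

```python
import numpy as np

def find_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst, from four
    or more point correspondences (Nx2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the right null vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2-D point through H in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Card template corners and their (hypothetical) detected frame positions:
# here the card appears scaled by 2 and shifted by (3, 5).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[3, 5], [5, 5], [5, 7], [3, 7]], float)
H = find_homography(src, dst)
```

In the full pipeline, H (together with the camera intrinsics) is decomposed into the card's pose, which is what lets a 3D model, rather than just a flat texture, be rendered onto the card.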


15-112 Interactive Interview Bot Nov – Dec 2016 abstract | report | video | code
Developed an interactive Interview Bot to minimize human error and bias in interviewing candidates to assess candidate personality and soft-skill fit for the applied position, replacing human interviewers for first-stage interviews.
Built on IBM Watson’s AI, the system’s core was an Input-Process-Output-Feedback (Learn) control loop, bringing human-like cognition to the machine; it could be developed further using machine-learning algorithms.
Features:
• Voice-based interaction with the candidate
• Analyzes the candidate responses to assess their 47 personality traits
• Assesses candidates against 761 supported job positions, from Researcher to DJ
• Recommends jobs with best personality fit to the candidate
Software: Python
Frameworks: IBM Watson, Google Text-to-Speech
Key Concepts: discretize human personality, human-computer interaction


PennApps Hack XIV: Take Me Out! Team: Daniel Barychev, Guangyu Chen, Sara Misra, Alex Rudenko Fall 2016 abstract | webpage

Take Me Out presents the user with an option for a night out, just a button click away. Through like and dislike buttons, the app collects user feedback and uses it to make more informed decisions from recommendation to recommendation.
I designed and developed the app's user interface.


Publication

SmartHealth@PlantIntelligence 29 Dec 2015 abstract
IP.COM DISCLOSURE NUMBER: IPCOM000244611D
I conceptualized and co-authored a cognition-based solution to improve human health by recommending the right kind and number of plants based on influencing parameters, e.g. an individual’s or family’s demographic, biometric, emotional, financial, life-event, social, and environmental factors. I published this work as a disclosure through IBM; the ‘intelligence’ of the conceptualized solution could be further enhanced using deep-learning algorithms.