Hello, I'm

Abhinav Bhamidipati

Perception · Path Planning · Computer Vision · Machine Learning

Introduction

About


I'm Abhinav Bhamidipati. At Inception Robotics, I contribute to developing autonomy stacks and enhancing robot capabilities through visual odometry and sensor fusion. My passion lies in Computer Vision and Machine Learning, with experience in Perception, Path Planning, SLAM, and Reinforcement Learning.

Beyond engineering, I'm creative, with a love of sketching and painting. I'm driven by innovation, eager to collaborate, and excited to make meaningful contributions to robotics and beyond.

Education

Aug 2023 — May 2025

Master of Engineering, Robotics

University of Maryland, College Park, MD

CGPA: 3.84 / 4.0

Aug 2019 — May 2023

Bachelor of Engineering, Electronics & Telecommunication

University of Mumbai, Atharva College of Engineering

CGPA: 8.97 / 10.0

Career

Experience

Founding Robotics Software Engineer

Inception Robotics, College Park
  • Leading development of next-generation autonomy systems and establishing core robotics infrastructure.

Robotics Software Engineer

  • Designed full-stack autonomy pipelines for autonomous UGVs using C++, Python, ROS2, and CUDA in Unity and Isaac Sim simulations; integrated LiDAR and visual odometry modules with 3D ICP scan matching and ORB feature tracking.
  • Built and scaled reinforcement learning–based terrain navigation models, achieving 90% navigation success across dynamic terrains with automated CI/CD pipelines.
  • Extended autonomy stack with DWA-based local planner and LLM-powered natural language interface for voice-commanded mission execution.
  • Developed a CLIP-based vision model for object damage assessment, detecting surface deformation, cracks, and structural anomalies.

Computer Vision & SLAM Engineer

Onki Robotics, New York City
  • Optimized SLAM and localization pipelines for UGVs on Jetson platforms using ROS2, reducing pose update latency by 25%.
  • Enhanced environmental perception with sensor-fusion of LiDAR and stereo camera data, generating denser occupancy maps.
  • Developed ML-based human feature extraction and tracking models for robust person-following through crowds.

Research Assistant

University of Maryland, College Park
  • Integrated multi-beam sonar and depth sensors into UUV simulations, enhancing decision-making capabilities.
  • Improved environmental perception and navigation precision through advanced sensor fusion techniques.
  • Developed control algorithms to optimize UUV path planning and maneuverability in complex underwater environments.

Research Assistant

University of Toronto
  • Performed in-depth analysis of a large-scale database ingesting over 1 million patient records daily, uncovering critical trends.
  • Developed data-cleaning algorithms using Isolation Forest and created real-time tracking dashboards.

Web Developer

N. Thakkar & Associates
  • Improved web application throughput by 10% through load balancing and code optimization.
  • Created user interface designs using wireframes and mockups.