Gurugram, Haryana, India
Finance & Banking
Other
Quest Global

Job Requirements
- Implement advanced perception algorithms for autonomous vehicles using LiDAR, cameras, radar, and GNSS.
- Develop and optimize sensor fusion techniques to combine data from multiple sensors, improving the accuracy and reliability of perception systems (a minimal fusion sketch follows this list).
- Create algorithms for object detection, tracking, semantic segmentation, and classification from 3D point clouds (LiDAR) and camera data.
- Work on Simultaneous Localization and Mapping (SLAM) algorithms, including Graph SLAM, LIO-SAM, and visual-inertial SLAM.
- Develop sensor calibration techniques (intrinsic and extrinsic) and coordinate transformations between sensors.
- Contribute to the development of robust motion planning and navigation systems.
- Work with software stacks like ROS2 (Robot Operating System 2) for integration and deployment of perception algorithms.
- Develop, test, and deploy machine learning models for perception tasks (e.g., object detection, segmentation).
- Collaborate with cross-functional teams, including software engineers, data scientists, and hardware teams, to deliver end-to-end solutions.
- Stay up-to-date with industry trends, research papers, and emerging technologies to innovate and improve perception systems.
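Sensor fusion of the kind described above is commonly built on Bayesian filters. The snippet below is a minimal sketch, assuming a constant-velocity motion model and made-up noise parameters: a linear Kalman filter that fuses noisy 2D position fixes (e.g. GNSS) into a smoothed state estimate. It is illustrative only, not this role's actual stack.

```python
# Minimal linear Kalman filter fusing noisy position measurements
# (e.g. GNSS fixes) with a constant-velocity motion model.
# All noise parameters are illustrative assumptions.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=0.1):
        # State: [x, y, vx, vy]
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0              # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],      # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],       # only position is measured
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01              # process noise
        self.R = np.eye(2) * 1.0               # measurement noise (e.g. GNSS)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # z: 2D position measurement [x, y]
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

if __name__ == "__main__":
    kf = ConstantVelocityKF(dt=0.1)
    rng = np.random.default_rng(0)
    for k in range(50):
        true_pos = np.array([0.5 * k * 0.1, 1.0 * k * 0.1])   # straight-line motion
        z = true_pos + rng.normal(scale=1.0, size=2)          # noisy GNSS-like fix
        kf.predict()
        est = kf.update(z)
    print("final position estimate:", est[:2])
```

Fusing an additional modality (e.g. LiDAR odometry) amounts to calling update with that sensor's own H and R matrices; the extended and unscented variants named in the requirements below replace the linear model with nonlinear propagation.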
Work Experience
- Proven experience with perception algorithms for autonomous systems, particularly in the areas of LiDAR, camera, radar, GNSS, or other sensor modalities.
- Understanding of LiDAR technology, point cloud data structures, and processing techniques.
- Proficiency in programming languages such as C/C++, Python, or similar.
- In-depth knowledge of sensor fusion techniques (Kalman Filters, Extended Kalman Filters, Unscented Kalman Filters, Particle Filters) for combining data from LiDAR, camera, radar, and GNSS.
- Solid background in computer vision techniques (e.g., object detection, semantic segmentation, feature extraction).
- Experience in deep learning frameworks such as TensorFlow or PyTorch for object detection and segmentation tasks.
- Knowledge of SLAM (Simultaneous Localization and Mapping) and localization algorithms, including GraphSLAM, LIO-SAM, GTSAM, ORB-SLAM, and related technologies.
- Familiarity with ROS2 for the development of perception-based robotic systems and autonomous vehicles.
- Experience with multi-object tracking algorithms such as DeepSORT, SORT, and Kalman Filter-based tracking (an association sketch follows this list).
- Strong understanding of real-time systems and optimizing for low-latency processing.
- Proficiency in sensor calibration techniques and algorithms for both intrinsic and extrinsic calibration of LiDAR, cameras, radar, and GNSS (a frame-transform sketch follows this list).
- Hands-on experience with PCL (Point Cloud Library) and OpenCV for 3D point cloud and image processing.
- Experience with parallel computing and optimizing algorithms for real-time performance (e.g., CUDA, OpenCL).
- Experience with object detection models such as YOLO, Faster R-CNN, SSD, or similar.
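At run time, the extrinsic calibration listed above reduces to a rigid-body transform between sensor frames followed by a camera projection. The sketch below is a hypothetical illustration: the rotation, translation, and pinhole intrinsics are assumed example values, not calibration results.

```python
# Illustrative LiDAR-to-camera extrinsic transform and pinhole projection.
# The rotation, translation and camera intrinsics are assumed values for
# demonstration only; in practice they come from a calibration procedure.
import numpy as np

# Extrinsics: 4x4 homogeneous transform from the LiDAR frame to the camera frame.
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = np.array([[0, -1,  0],
                                [0,  0, -1],
                                [1,  0,  0]], dtype=float)   # assumed axis swap
T_cam_lidar[:3, 3] = np.array([0.1, -0.05, -0.2])            # assumed lever arm (m)

# Intrinsics: pinhole camera matrix (assumed values).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def lidar_to_image(points_lidar):
    """Transform Nx3 LiDAR points into the camera frame and project to pixels."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])        # Nx4 homogeneous
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]                # Nx3 in camera frame
    in_front = pts_cam[:, 2] > 0.1                            # keep points ahead of camera
    pts_cam = pts_cam[in_front]
    pixels = (K @ pts_cam.T).T
    pixels = pixels[:, :2] / pixels[:, 2:3]                   # perspective divide
    return pixels, pts_cam

if __name__ == "__main__":
    cloud = np.array([[ 5.0,  1.0, 0.2],
                      [ 8.0, -2.0, 0.5],
                      [-3.0,  0.0, 0.0]])                     # last point is behind the camera
    px, _ = lidar_to_image(cloud)
    print(px)
```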
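The SORT-family trackers listed above associate per-frame detections with predicted track boxes via IoU. The snippet below sketches that association step with greedy matching for brevity; SORT itself solves the assignment with the Hungarian algorithm, and the boxes are invented example data.

```python
# Simplified IoU-based detection-to-track association, in the spirit of SORT.
# Greedy matching is used here for brevity. Boxes are [x1, y1, x2, y2].
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match detections to predicted track boxes by descending IoU."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break
        if ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti); used_d.add(di)
    unmatched_dets = [di for di in range(len(detections)) if di not in used_d]
    return matches, unmatched_dets

if __name__ == "__main__":
    tracks = [np.array([10, 10, 50, 50]), np.array([100, 100, 150, 150])]
    dets = [np.array([12, 11, 52, 49]), np.array([300, 300, 340, 340])]
    print(associate(tracks, dets))   # -> ([(0, 0)], [1])
```

Unmatched detections would spawn new tracks, and unmatched tracks would be aged out; each matched track is then corrected with a Kalman update as in the fusion sketch earlier.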