Swayansaasita is a research and development initiative in Autonomous Vehicles (AV) with a focus on the challenges faced in developing countries such as India. The project also covers the design and development of Advanced Driver Assistance Systems (ADAS) for two-wheeler vehicles.
The availability of high-quality data is the key element for research and development of intelligent systems for autonomous driving. Such data also supports the design and development of simulations and models for studying driving dynamics from the driver's point of view, and helps in understanding the behavior of other elements on the road, such as pedestrians and other vehicles.
We are currently collecting, post-processing, and annotating the dataset. Through this project, we are releasing the largest available multimodal dataset for research and development in autonomous driving and traffic-behavior studies. Swayansaasita focuses on the following key objectives.
Largest Multimodal Dataset
The largest real-world multimodal dataset, including 360° point clouds, multiple IMU sensors, and GPS.
Annotated 3D data for ML.
Annotated data, including 3D point clouds, IMU events, and images, for ML applications.
Our software.
We develop a simulation environment and UI to interact with the dataset and run experiments.
We train predictive ML models.
We experiment with ML algorithms to develop Advanced Driver Assistance Systems (ADAS).
Technical writeups on the technologies involved in this project.
Stereovision cameras reconstruct 3D scenes by triangulating corresponding points from two or more viewpoints. This article explains passive and active stereo modalities, epipolar geometry, calibration, disparity estimation, and error sources. It compares stereovision with ToF and structured light, and maps applications across robotics, AR/VR, automotive, industrial metrology, and healthcare, with practical design notes and references for reproducible deployment.
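As a hedged illustration of the triangulation principle mentioned above: for a rectified stereo pair, depth follows the classic relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the pixel disparity. The numeric values below are illustrative assumptions, not parameters of any particular rig.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 12 cm baseline, 21 px disparity.
z = depth_from_disparity(700.0, 0.12, 21.0)
print(round(z, 2))  # ~4.0 m
```

Note how depth error grows as disparity shrinks: distant points with small disparities are the hardest to measure accurately, one of the error sources the article discusses.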
Computer Vision enables machines to interpret and analyze visual data. This article explains fundamental principles, core techniques such as image processing, feature extraction, and deep learning, along with calibration and error sources. It compares computer vision with human vision, and maps applications across robotics, AR/VR, automotive, healthcare, and industry, with practical design notes and references for reproducible deployment.
The Robot Operating System (ROS) is an open-source middleware framework that simplifies robotics software development. This article explains ROS architecture, communication models, tools, and ecosystem. It compares ROS 1 and ROS 2, highlights benefits and limitations, and maps applications across robotics, automation, healthcare, automotive, and industry, with practical design notes and references for reproducible deployment.
Point clouds are collections of 3D data points representing surfaces and environments. This article explains acquisition methods, processing challenges, and algorithms for segmentation, classification, and reconstruction. It compares point clouds with other 3D data formats and maps applications across robotics, autonomous driving, AR/VR, healthcare, and industry, with practical design notes and references for reproducible deployment.
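One of the processing steps touched on above, downsampling, can be sketched as a minimal voxel-grid filter: points falling in the same voxel are replaced by their centroid. This is a simplified, dependency-free sketch; the voxel size and sample points are illustrative assumptions.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.5):
    """Replace all points that fall in the same voxel with their centroid."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        buckets[key].append((x, y, z))
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.0), (1.9, 0.1, 0.0)]
print(voxel_downsample(cloud))  # two occupied voxels -> two centroid points
```

Production pipelines typically use library implementations of the same idea, which scale to the millions of points per scan produced by 360° LiDAR.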
Global Navigation Satellite Systems (GNSS) provide worldwide positioning, while Real-Time Kinematic (RTK) enhances accuracy to centimeter levels. This article explains GNSS fundamentals, RTK working principles, error corrections, and compares them with DGPS. It maps applications across surveying, agriculture, robotics, UAVs, and autonomous navigation, with design considerations and references for reproducible deployment.
Computer vision enables autonomous vehicles to perceive and interpret their surroundings. This article explains core principles, algorithms for detection and tracking, sensor fusion, and challenges such as weather and occlusion. It compares computer vision with LiDAR and radar, and maps applications across navigation, safety, and industry, with design considerations and references for reproducible deployment.
Hardware-software integration bridges embedded systems and applications. This article explains integration using microcontrollers (MCUs), development boards, and FPGAs. It covers workflows, toolchains, and design considerations, compares architectures, and maps applications across robotics, IoT, automotive, and industry. Practical notes highlight debugging, scalability, and reproducibility, with references for engineers seeking robust, real-world deployment.
Autonomous vehicles rely on multimodal sensors, data fusion, and machine learning for perception and decision making. This article elaborates the complete chain: sensor acquisition, preprocessing, fusion, prediction, and control. It emphasizes machine learning techniques for trajectory prediction, risk assessment, and decision making, mapping applications across navigation, safety, and industry, with design considerations and references for reproducible deployment.
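The fusion-and-prediction chain described above can be illustrated, in heavily simplified form, by a one-dimensional Kalman filter that fuses noisy position measurements into a smoothed state estimate. This is a toy sketch with assumed noise parameters, not the estimator used in any production AV stack.

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter (constant-position model):
    q = process noise, r = measurement noise, (x0, p0) = initial state/variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q            # predict: uncertainty grows between measurements
        k = p / (p + r)      # Kalman gain: trust in the new measurement
        x = x + k * (z - x)  # update the estimate toward the measurement
        p = (1 - k) * p      # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy_positions = [1.1, 0.9, 1.05, 0.98, 1.02]
print(kalman_1d(noisy_positions)[-1])  # converges toward the true value ~1.0
```

Real AV pipelines extend this idea to multi-dimensional state vectors (position, velocity, heading) and to nonlinear variants such as the extended or unscented Kalman filter when fusing GNSS, IMU, and odometry.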
Swayansaasita: Phase-I is a research and development initiative by the Subhankar Mishra Lab, funded and supported by La Fondation Dassault Systèmes, India.
Please fill out the correspondence form attached below to access the full dataset. A preview of the dataset is available here.
The large amount of data captured for the IndiGo dataset is updated continuously as the Swayansaasita team finishes post-processing individual recording sessions.
The Terms of Usage are available here.
The BibTeX citations for this project are attached below. The contents of this project are available for research purposes and can be used with due citation.
1. @misc{jyothish_smishra_2024,
     title = {Swayansaasita - Autonomous Two-Wheeler Driving - Phase 1},
     author = {Jyothish, K. J. and Keshri, S. and Vishwakarma, A. and Mishra, S.},
     year = {2024},
     howpublished = {\url{https://smlab.niser.ac.in/project/swayansaasita/}},
     note = {Accessed: Dec 08, 2025}
   }
2. @article{Jyothish2025,
title = {Multimodal two-wheeler driving dataset for autonomous driving applications},
volume = {1},
ISSN = {3059-3204},
url = {http://dx.doi.org/10.1007/s44430-025-00013-1},
DOI = {10.1007/s44430-025-00013-1},
number = {1},
journal = {Discover Robotics},
publisher = {Springer Science and Business Media LLC},
author = {Jyothish, Kumar J. and Keshri, Shriman and Vishwakarma, Anubhav and Mishra, Subhankar},
year = {2025},
month = dec
}