Features
Control and Command for Robot Dog
From intuitive Web UIs to advanced physical controls, operators can command RoboDetection through a variety of input methods, including keyboards and digital joysticks.
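As a rough illustration of the command path, the sketch below maps operator key presses to ROS velocity commands. This is a minimal sketch assuming a ROS 1 (rospy) setup; the /cmd_vel topic name, key bindings, and speed values are illustrative assumptions, not RoboDetection's actual configuration.

```python
# Minimal sketch: translating operator key presses into ROS velocity commands.
# Assumes ROS 1 (rospy); the /cmd_vel topic and the speed values are
# illustrative assumptions, not the project's actual configuration.
import rospy
from geometry_msgs.msg import Twist

KEY_BINDINGS = {
    "w": (0.5, 0.0),   # forward: (linear.x, angular.z)
    "s": (-0.5, 0.0),  # backward
    "a": (0.0, 0.8),   # turn left
    "d": (0.0, -0.8),  # turn right
    " ": (0.0, 0.0),   # stop
}

def publish_command(pub, key):
    """Translate a single key press into a Twist message."""
    linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
    msg = Twist()
    msg.linear.x = linear
    msg.angular.z = angular
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("teleop_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.sleep(1.0)              # give the publisher time to connect
    publish_command(pub, "w")     # e.g. drive forward once
```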
Human Detection and Tracking
RoboDetection receives real-time video streams from the robot, processes them with the YOLO object detection algorithm, and outputs video streams in which humans are highlighted with bounding boxes, allowing operators to effectively identify and track individuals within the robot's field of vision. Operators can select a target individual with a single mouse click in the Web UI, after which the system autonomously directs the robot to follow the selected individual, streamlining tracking in complex environments such as debris fields or caves.
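The sketch below outlines this detection step under some assumptions: it loads YOLOv7 through torch.hub from the WongKinYiu/yolov7 repository (the project may load its weights differently), keeps only "person" detections (COCO class 0), and draws the bounding boxes with OpenCV.

```python
# Minimal sketch of the detection/annotation step: run a YOLO model on a
# frame, keep only "person" detections, and draw bounding boxes. Loading
# YOLOv7 via torch.hub and the 0.5 confidence cutoff are assumptions.
import cv2
import torch

model = torch.hub.load("WongKinYiu/yolov7", "custom", "yolov7.pt")

def annotate_humans(frame):
    """Return a copy of the frame with persons boxed, plus the box list."""
    frame = frame.copy()
    results = model(frame)
    boxes = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if int(cls) == 0 and conf > 0.5:  # class 0 = "person" in COCO
            box = (int(x1), int(y1), int(x2), int(y2))
            boxes.append(box)
            cv2.rectangle(frame, box[:2], box[2:], (0, 255, 0), 2)
            cv2.putText(frame, f"person {conf:.2f}", (box[0], box[1] - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame, boxes
```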
Real-Time Area Mapping
RoboDetection uses the robot's laser range sensors to generate real-time 2D occupancy grid maps, providing essential data for pre-mission planning. These maps are crucial for strategic deployment in uncharted or dangerous areas.
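A minimal sketch of this mapping step follows, assuming a ROS LaserScan input and illustrative /scan and /map topic names. It only marks scan endpoints as occupied; a full mapper would also ray-trace the free cells in between.

```python
# Minimal sketch: build a 2D occupancy grid from laser ranges and publish it
# as nav_msgs/OccupancyGrid. Grid size, resolution, and the /scan and /map
# topic names are illustrative assumptions.
import math
import rospy
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import OccupancyGrid

SIZE, RES = 200, 0.05  # 200x200 cells at 5 cm -> a 10 m x 10 m map

def scan_to_grid(scan):
    grid = [-1] * (SIZE * SIZE)          # -1 = unknown
    cx = cy = SIZE // 2                  # robot at the grid center
    angle = scan.angle_min
    for r in scan.ranges:
        if scan.range_min < r < scan.range_max:
            x = cx + int(r * math.cos(angle) / RES)
            y = cy + int(r * math.sin(angle) / RES)
            if 0 <= x < SIZE and 0 <= y < SIZE:
                grid[y * SIZE + x] = 100  # 100 = occupied
        angle += scan.angle_increment
    return grid

def on_scan(scan):
    msg = OccupancyGrid()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"
    msg.info.resolution = RES
    msg.info.width = msg.info.height = SIZE
    msg.info.origin.position.x = msg.info.origin.position.y = -SIZE * RES / 2
    msg.info.origin.orientation.w = 1.0  # identity orientation
    msg.data = scan_to_grid(scan)
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("grid_sketch")
    pub = rospy.Publisher("/map", OccupancyGrid, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()
```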
How It Works
- Robot Dog Simulation Sends Video: Our Gazebo-simulated robot dog sends depth-inclusive video to our server through Husarnet VPN, ensuring secure and reliable data transmission.
- Server-Side Processing, YOLO, 2D Mapping: The server processes the video stream with the YOLO object detection algorithm, annotating objects with bounding boxes and generating a 2D occupancy grid map using depth information from laser sensors.
- Web UI Control and Command: The Web UI displays the processed video, an object list, and the real-time occupancy grid map, and includes controls for manual command input. Operators can select and track humans from the video feed for enhanced operational efficiency.
- Human Tracking: Upon selection of a target, the server calculates and updates the navigation path, directing the robot dog to track movements dynamically while responding to manual commands as necessary (see the sketch after this list).
- Robot Dog in Simulator: Commands from the server are relayed to the robot dog within the Gazebo simulator, prompting movement toward the designated location.
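To make the Human Tracking step concrete, here is a minimal follow-loop sketch: it steers the robot so the selected person's bounding-box center stays in the middle of the image, and uses the depth reading to hold a roughly constant distance. The gains, image width, topic name, and desired distance are illustrative assumptions, not the project's actual planner.

```python
# Minimal sketch of the follow step: center the selected person's bounding
# box in the image and hold distance using depth. Gains, image width, and
# the /cmd_vel topic are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist

IMAGE_WIDTH = 640
DESIRED_DISTANCE = 1.5          # meters to keep between dog and target
TURN_GAIN, DRIVE_GAIN = 0.005, 0.6

def follow_target(pub, box, depth_m):
    """box = (x1, y1, x2, y2) of the selected person; depth_m from the camera."""
    center_x = (box[0] + box[2]) / 2.0
    error_px = center_x - IMAGE_WIDTH / 2.0       # >0: target is to the right
    cmd = Twist()
    cmd.angular.z = -TURN_GAIN * error_px         # turn toward the target
    cmd.linear.x = DRIVE_GAIN * (depth_m - DESIRED_DISTANCE)  # close the gap
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("follow_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.sleep(1.0)
    follow_target(pub, box=(300, 120, 380, 360), depth_m=2.4)  # example values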
Demonstration
Tech Stack
Our backend combines Flask, ROS, and YOLOv7, with real-time data handling via socket.io. For simulation we use Gazebo and RViz. The frontend is built with React, and we use GitLab for version control.
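As a sketch of how the real-time pieces of this stack connect, the snippet below uses Flask and Flask-SocketIO to broadcast JPEG-encoded annotated frames to the browser. The "frame" event name and the base64 encoding scheme are assumptions for illustration, not the project's actual protocol.

```python
# Minimal sketch: Flask + Flask-SocketIO pushing JPEG-encoded frames to the
# React client. The "frame" event name and encoding are assumptions.
import base64
import cv2
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

def push_frame(frame):
    """Encode an annotated OpenCV frame and broadcast it to all clients."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if ok:
        socketio.emit("frame", base64.b64encode(jpeg.tobytes()).decode())

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```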
About Us
The RoboDetection team is a group of senior students who are deeply concerned about their country's earthquake risk. With the RoboDetection system, they aim to support rescue teams in their search-and-rescue efforts.
Contact Information
Email: robodetection.metu@gmail.com
Address: Orta Doğu Teknik Üniversitesi, Üniversiteler Mahallesi, Dumlupınar Bulvarı No:1 06800 Çankaya
