ROBODETECTION
Advanced system for human detection, tracking, and mapping. Better, safer, easier. Revolutionizing rescue operations.

The RoboDetection project is engineered to support rescue teams coordinating operations in disaster zones. Built on state-of-the-art detection and mapping technology, it makes rescue missions quicker, more dependable, and safer for both rescuers and survivors.


Features

Control and Command for Robot Dog

From intuitive Web UIs to advanced physical controls, operators can command RoboDetection using a variety of input methods, including keyboards and digital joysticks.

Human Detection and Tracking

RoboDetection receives real-time video streams from the robot, processes them with the YOLO object detection algorithm, and outputs video with humans highlighted by bounding boxes, letting operators identify and track individuals within the robot's field of vision. An operator can select a target with a single mouse click in the Web UI, after which the system autonomously directs the robot to follow that individual, streamlining tracking in complex environments such as debris fields or caves.
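
As a rough illustration of this detection step, the sketch below runs a YOLOv7 model on a video frame and boxes every detected person. It is not the project's actual pipeline: the torch.hub entrypoint of the WongKinYiu/yolov7 repository and the local yolov7.pt checkpoint are assumptions.

    # Minimal person-detection sketch; illustrative, not the project's
    # exact pipeline. Assumes the torch.hub entrypoint of the
    # WongKinYiu/yolov7 repository and a local yolov7.pt checkpoint.
    import cv2
    import torch

    model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'yolov7.pt')

    def annotate_humans(frame):
        """Draw a box around every detected person in a BGR frame."""
        results = model(frame[..., ::-1].copy())       # BGR -> RGB for the detector
        for *box, conf, cls in results.xyxy[0].tolist():
            if int(cls) == 0:                          # COCO class 0 = person
                x1, y1, x2, y2 = map(int, box)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, f'person {conf:.2f}', (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        return frame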

Real-Time Area Mapping

RoboDetection uses the robot's laser sensors to generate real-time 2D occupancy grid maps, providing essential data for pre-mission planning and for strategic deployment in uncharted or dangerous areas.
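
A minimal sketch of the idea, assuming a single laser scan, a fixed 10 m x 10 m grid, and the robot at its centre (real occupancy mappers also ray-trace the free cells between the robot and each hit):

    # Minimal 2D occupancy grid built from one laser scan. Grid size,
    # resolution, and the robot-at-centre convention are illustrative
    # assumptions.
    import numpy as np

    RESOLUTION = 0.05                       # metres per cell
    GRID = 200                              # 200 x 200 cells = 10 m x 10 m
    UNKNOWN, OCCUPIED = -1, 100             # ROS OccupancyGrid value convention

    def scan_to_grid(ranges, angle_min, angle_increment, max_range=8.0):
        """Mark each valid laser hit as an occupied cell around the robot."""
        grid = np.full((GRID, GRID), UNKNOWN, dtype=np.int8)
        cx = cy = GRID // 2                 # robot sits at the grid centre
        for i, r in enumerate(ranges):
            if not np.isfinite(r) or r >= max_range:
                continue                    # skip missing or out-of-range beams
            theta = angle_min + i * angle_increment
            x = int(cx + r * np.cos(theta) / RESOLUTION)
            y = int(cy + r * np.sin(theta) / RESOLUTION)
            if 0 <= x < GRID and 0 <= y < GRID:
                grid[y, x] = OCCUPIED       # beam endpoint is an obstacle
        return grid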

How It Works

  1. Robot Dog Simulation Sends Video
    Our Gazebo-simulated robot dog sends depth-inclusive video to our server through Husarnet VPN, ensuring secure and reliable data transmission.
  2. Server Side Processing, YOLO, 2D Mapping
    The server processes the video stream with the YOLO object detection algorithm, annotating objects with bounding boxes and generating a 2D occupancy grid map using depth information from laser sensors.
  3. Web UI Control, Command
    The Web UI displays the processed video, the list of detected objects, and the real-time occupancy grid map, alongside controls for manual command input. Operators can select and track humans directly from the video feed.
  4. Human Tracking
    Upon selection of a target, the server calculates and updates the navigation path, directing the robot dog to track the target's movements dynamically while still responding to manual commands as necessary (a simplified controller sketch follows this list).
  5. Robot Dog in Simulator
    Commands from the server are relayed to the robot dog within the Gazebo simulator, prompting movement towards the designated location.
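
As a rough sketch of steps 4 and 5, the proportional controller below turns the robot to keep the selected person's bounding box centred and drives forward until the box looks close enough. The /cmd_vel topic, gains, and stop threshold are illustrative assumptions, not the project's actual planner.

    # Follow-the-target sketch: turn to keep the selected person's bounding
    # box centred, drive forward until the box covers enough of the frame.
    import rospy
    from geometry_msgs.msg import Twist

    def follow_target(pub, box, frame_w, frame_h, stop_ratio=0.15):
        x1, y1, x2, y2 = box
        # Horizontal offset of the box centre, normalised to [-1, 1].
        err = ((x1 + x2) / 2.0 - frame_w / 2.0) / (frame_w / 2.0)
        # Fraction of the frame the box covers: a crude distance proxy.
        area = (x2 - x1) * (y2 - y1) / float(frame_w * frame_h)
        cmd = Twist()
        cmd.angular.z = -0.8 * err                        # turn toward the person
        cmd.linear.x = 0.5 if area < stop_ratio else 0.0  # stop when close
        pub.publish(cmd)

    # Usage (inside a ROS node):
    #   rospy.init_node('tracker')
    #   pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    #   follow_target(pub, (310, 120, 390, 360), 640, 480)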

Demonstration

Tech Stack

Our backend combines Flask, ROS, and YOLOv7, with real-time data exchange handled via Socket.IO. For simulation we use Gazebo and RViz. The frontend is built with React, and development is coordinated on GitLab for version control.
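
For flavour, here is a minimal Flask + Flask-SocketIO relay in the spirit of this stack; the 'frame' and 'annotated_frame' event names are illustrative, not the project's real protocol.

    # Minimal Socket.IO relay: receive a frame event, echo it to clients.
    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app, cors_allowed_origins='*')

    @socketio.on('frame')
    def handle_frame(jpeg_bytes):
        # The real server would run YOLO annotation here before emitting.
        socketio.emit('annotated_frame', jpeg_bytes)

    if __name__ == '__main__':
        socketio.run(app, host='0.0.0.0', port=5000)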

React
ROS
Flask
Gazebo
GitLab

About Us

The RoboDetection team is a group of senior students deeply concerned about their country's earthquake risk. With the RoboDetection system, they aim to help rescue teams in their rescue operations.

Barış Sarper Tezcan

Project Manager / The Boss

Furkan Genç

Developer

Serhat Andıç

Developer

İsmail Karabaş

Developer

Hikmet Türkan

Developer

Assoc. Prof. Dr. Emre Akbaş

Project Supervisor and Mentor

Contact Information

Address: Orta Doğu Teknik Üniversitesi, Üniversiteler Mahallesi, Dumlupınar Bulvarı No:1 06800 Çankaya

Sponsor
