Open-Source Autonomous Mobile Robot

Welcome to my webpage, which provides information and instructions for the open-source autonomous mobile robot developed by Matthew Sato at Stanford University.


Project Overview

The autonomous mobile robot is designed to demonstrate affordable (~$1,200) mobile robotics and artificial intelligence. This project showcases the integration of multiple sensors, navigation algorithms, agent-agent communication, computer vision, and real-time decision-making on a mobile platform.

The robot is capable of autonomous navigation, obstacle avoidance, environment mapping, object recognition, voice recognition, and communication. The project provides a software stack for mobile robot operation.

This open-source project aims to make advanced robotics concepts accessible to students, researchers, and hobbyists while pushing the boundaries of what's possible with affordable hardware and innovative software solutions.

Please watch the following video for a demonstration of the mobile robot's capabilities:

Mobile Robot Instructions

Please visit the instructions page for detailed manufacturing, setup, and usage instructions.

Robot Components

The mobile robot is built using a combination of sensors, actuators, and computing platforms.

Computing Hardware

The NVIDIA Jetson Orin Nano Developer Kit (8GB) serves as the primary computing unit, running ROS and coordinating all robot subsystems.
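
Each subsystem typically runs as its own ROS node on the Jetson. As a minimal sketch (not code from the MattBot repositories, and with hypothetical node and topic names), a Python node that periodically publishes a status message looks like this:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HeartbeatNode(Node):
    """Publishes a periodic status message; subsystem nodes follow the same pattern."""

    def __init__(self):
        super().__init__('heartbeat')
        self.pub = self.create_publisher(String, 'robot_status', 10)
        self.create_timer(1.0, self.tick)  # fire once per second

    def tick(self):
        self.pub.publish(String(data='mattbot: subsystems nominal'))

def main():
    rclpy.init()
    node = HeartbeatNode()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```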

Vision Systems

An Astra Pro Plus RGB-D camera is used for object detection. An Arducam USB camera is used for facial recognition.
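
The full detection and recognition pipelines live in the Mattbot Image Detection repository. As a simple illustration of the camera-side idea only, OpenCV's bundled Haar cascade can detect faces in frames from a USB camera (detection, not recognition; the camera index is an assumption):

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)  # USB camera index; adjust for your setup
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Draw a box around each detected face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```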

LiDAR Sensor

The RPLIDAR A2M12 360-degree 2D LiDAR is used for mapping and localization.
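
Mapping and localization are built on top of the raw scan stream. As a hedged sketch of reading that stream, a ROS 2 subscriber can consume LaserScan messages and report the nearest obstacle; the 'scan' topic name is a common driver convention, not a confirmed detail of this robot:

```python
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class NearestObstacle(Node):
    def __init__(self):
        super().__init__('nearest_obstacle')
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        # Keep only readings inside the sensor's valid range.
        valid = [(r, i) for i, r in enumerate(msg.ranges)
                 if msg.range_min < r < msg.range_max]
        if not valid:
            return
        r, i = min(valid)
        bearing = math.degrees(msg.angle_min + i * msg.angle_increment)
        self.get_logger().info(f'nearest obstacle: {r:.2f} m at {bearing:.0f} deg')

def main():
    rclpy.init()
    rclpy.spin(NearestObstacle())

if __name__ == '__main__':
    main()
```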

Motor System

A differential drive system with wheel encoders provides precise motion control, and wheel odometry provides accurate positioning. The motors are controlled by a custom motor driver board.
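
For readers unfamiliar with wheel odometry, the sketch below shows the standard differential-drive pose update from encoder tick deltas. The wheel radius, wheel base, and ticks-per-revolution values are placeholders, not MattBot's actual parameters:

```python
import math

# Placeholder parameters; the real values depend on the MattBot hardware.
WHEEL_RADIUS = 0.035   # m
WHEEL_BASE = 0.20      # m, distance between the wheel centers
TICKS_PER_REV = 1440   # encoder ticks per wheel revolution

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Advance the pose estimate (x, y, theta) by one pair of encoder deltas."""
    # Arc length traveled by each wheel.
    dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d = (dl + dr) / 2                 # forward distance of the robot center
    dtheta = (dr - dl) / WHEEL_BASE   # change in heading
    # Integrate at the midpoint heading for better accuracy over the step.
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    theta = (theta + dtheta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta

# Example: both wheels advance 100 ticks, so the robot drives straight ahead.
print(update_odometry(0.0, 0.0, 0.0, 100, 100))  # -> (~0.0153, 0.0, 0.0)
```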

IMU

A 6-axis inertial measurement unit is used for orientation tracking.

Microphones

An array of MEMS microphones is used for voice recognition and human interaction.

Touch Screen Display

An interactive touch screen display provides the user interface and robot control.

GitHub Repositories

The MattBot project is fully open-source. The following repositories contain the CAD, electrical design, and software components for constructing and operating the mobile robot.

Robot

Contains the mechanical design, electrical design, and microcontroller software for constructing the robot.


DDS Robot Platform

Contains the user GUI and user communications code.


Decentralized Camera Sensor

Code for a Raspberry Pi-based smart camera that performs object detection and communicates over a decentralized network.


Mattbot Bringup

Bringup files and communication with the low-level microcontroller.


Mattbot DDS

Data Distribution Service (DDS) implementation for agent-agent communication.
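
Because ROS 2 topics are themselves carried over DDS, one way to picture agent-agent messaging is a node that publishes its own status and listens for peers on a shared topic. This is an illustrative sketch with hypothetical topic and message choices, not code from this repository:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class PeerLink(Node):
    """Shares this robot's status and listens for other agents on the same topic."""

    def __init__(self, robot_id: str):
        super().__init__(f'peer_link_{robot_id}')
        self.robot_id = robot_id
        self.pub = self.create_publisher(String, 'fleet_status', 10)
        self.create_subscription(String, 'fleet_status', self.on_peer, 10)
        self.create_timer(2.0, self.announce)

    def announce(self):
        self.pub.publish(String(data=f'{self.robot_id}: ready'))

    def on_peer(self, msg: String):
        # Ignore our own announcements; log everything from other agents.
        if not msg.data.startswith(self.robot_id):
            self.get_logger().info(f'peer says: {msg.data}')

def main():
    rclpy.init()
    rclpy.spin(PeerLink('mattbot_1'))

if __name__ == '__main__':
    main()
```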


Mattbot Image Detection

Code for image processing and object detection using computer vision techniques.


Mattbot Navigation

Code for robot navigation, path planning, and obstacle avoidance.
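
As a rough illustration of grid-based path planning (the repository's actual planner may differ), the following sketch runs A* with a Manhattan-distance heuristic on a small occupancy grid:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid where 0 = free and 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:       # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:            # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float('inf')):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```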


Mattbot Record

Code for processing audio signals.


Mattbot Display

Code for the onboard touch screen display.
