# Unitree G1's SLAM Implementation: A Technical Deep-Dive into Autonomous Navigation

The Unitree G1 humanoid robot demonstrates impressive SLAM capabilities using a Livox Mid360 LiDAR unit, despite some networking quirks and implementation challenges. Here’s what we learned from building out its navigation stack.

## Hardware Configuration
The G1’s sensor suite consists of three primary components:
- Livox Mid360 LiDAR (mounted upside-down under the head)
- RGB camera (front-facing, angled down)
- Depth camera (aligned with RGB)
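Because the Mid360 is mounted upside-down, its point cloud has to be flipped back into the robot's base frame before any mapping can happen. A minimal sketch of that transform, assuming a pure 180° roll and a placeholder mount height (the real extrinsics would come from calibration):

```python
import numpy as np

# Hypothetical extrinsics for the inverted Mid360 mount: a 180-degree roll
# about the x-axis flips the sensor frame back into the robot's base frame.
# The translation offset is a placeholder, not a measured value.
R_FLIP = np.array([
    [1.0,  0.0,  0.0],
    [0.0, -1.0,  0.0],
    [0.0,  0.0, -1.0],
])
T_OFFSET = np.array([0.0, 0.0, 0.60])  # sensor height above base (placeholder)

def lidar_to_base(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud from the inverted LiDAR frame to base frame."""
    return points @ R_FLIP.T + T_OFFSET
```

In practice this transform would be folded into the first stage of the SLAM pipeline so every downstream consumer sees a right-side-up cloud.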

## Processing Architecture
The system runs on three interconnected computers:
| Component | Role |
|---|---|
| Jetson Board | Main controller & sensor hub |
| LiDAR Unit | Point cloud processing |
| Control Unit | Motor control & balance |

## SLAM Implementation
Rather than wrestling with ROS's notorious setup complexity, we opted for a lightweight Python-based approach using KISS-ICP. This decision prioritized development speed over advanced features such as loop closure.
The SLAM stack consists of:
- KISS-ICP for point cloud registration
- Custom occupancy grid generator
- Basic A* pathfinding implementation
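The custom occupancy grid step is conceptually simple: project the registered cloud onto the ground plane, keep only points at heights that could block the robot, and mark the corresponding cells. A sketch with illustrative parameters (5 cm cells, 10 m × 10 m area; the actual resolution is one of the things still being tuned):

```python
import numpy as np

def build_occupancy_grid(points, resolution=0.05, size_m=10.0,
                         z_min=0.1, z_max=1.8):
    """Project a registered (N, 3) point cloud into a 2D occupancy grid.

    Parameters are illustrative: 5 cm cells over a 10 m x 10 m area centered
    on the robot, keeping only points in a height band that could block it.
    """
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)
    # Height filter: ignore the floor and anything above the robot.
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    xy = points[mask, :2]
    # World coordinates -> grid indices, origin at the grid center.
    idx = np.floor(xy / resolution).astype(int) + cells // 2
    in_bounds = np.all((idx >= 0) & (idx < cells), axis=1)
    idx = idx[in_bounds]
    grid[idx[:, 1], idx[:, 0]] = 1  # row = y, column = x
    return grid
```

KISS-ICP supplies the registered poses; this grid then feeds directly into the A* planner.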

## Network Architecture
One of the more challenging aspects was managing the distributed computing setup. The network configuration requires specific IP assignments for each component, making wireless operation non-trivial.
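A simple startup sanity check illustrates the constraint. The addresses and subnet below are placeholders, not documented G1 values; the point is that every component, including any wireless host joining the robot, must land in the same subnet:

```python
import ipaddress

# Placeholder static assignments -- the actual G1 addressing may differ.
COMPONENTS = {
    "jetson": "192.168.123.161",
    "lidar": "192.168.123.120",
    "control": "192.168.123.10",
}
ROBOT_SUBNET = ipaddress.ip_network("192.168.123.0/24")

def misconfigured(components=COMPONENTS, subnet=ROBOT_SUBNET):
    """Return names of components whose static IP falls outside the subnet.

    A host joining over Wi-Fi must also take an address in this subnet,
    which is why wireless operation needs extra configuration.
    """
    return [name for name, ip in components.items()
            if ipaddress.ip_address(ip) not in subnet]
```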

## Current Limitations
Several technical hurdles remain:
- IMU drift affects long-term mapping accuracy
- No loop closure implementation yet
- Occupancy grid resolution needs tuning
- Pathfinding occasionally generates invalid routes
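The invalid-route problem motivates a defensive check in the planner itself. A minimal 4-connected A* over the occupancy grid, with a final pass that rejects any path touching an occupied cell (a sketch, not our production planner):

```python
import heapq

def astar(grid, start, goal):
    """Minimal 4-connected A* over a 2D occupancy grid (0 = free, 1 = occupied).

    Returns a list of (row, col) cells from start to goal, or None. A final
    sanity pass rejects paths that touch an occupied cell, guarding against
    invalid routes slipping through.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # stale entry; node already expanded with a lower cost
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            path.reverse()
            # Validity check: no cell on the path may be occupied.
            return path if all(grid[r][c] == 0 for r, c in path) else None
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt, goal), g + 1, nxt, cur))
    return None
```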

## Camera Placement Considerations
The downward-angled camera position, while initially counterintuitive, suits common manipulation tasks, where the workspace sits below the head. It does, however, complicate vision-based manipulation of objects at other heights, since the tilt must be accounted for whenever camera observations are mapped into the robot's frame.
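The core of that bookkeeping is back-projecting a depth pixel through the tilted mount. A sketch with placeholder intrinsics, tilt angle, and mount height (none of these are measured G1 calibration values):

```python
import numpy as np

def pixel_to_robot(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                   tilt=np.deg2rad(30.0), cam_height=1.2):
    """Back-project a depth pixel into robot coordinates (x fwd, y left, z up).

    Intrinsics, tilt angle, and mount height are illustrative placeholders.
    Camera frame: x right, y down, z along the optical axis; `depth` is
    measured along that axis.
    """
    # Pinhole back-projection into the camera frame.
    x_c = (u - cx) * depth / fx
    y_c = (v - cy) * depth / fy
    z_c = depth
    # Rotate by the downward tilt and remap camera axes to robot axes.
    s, c = np.sin(tilt), np.cos(tilt)
    x_r = z_c * c - y_c * s                # forward
    y_r = -x_c                             # left
    z_r = cam_height - y_c * c - z_c * s   # height above ground
    return np.array([x_r, y_r, z_r])
```

For the image center at 1 m depth this puts the point forward by `cos(tilt)` and below the camera by `sin(tilt)`, which matches the intuition that the optical axis is pitched down.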

## Future Development
The next phase focuses on:
- Implementing hand and arm control
- Integrating visual object recognition
- Improving path planning algorithms
- Adding advanced vision processing capabilities

## SDK Status
The official Unitree Python SDK provides basic functionality, but custom modifications will be necessary for advanced manipulation tasks. We’re considering publishing our enhanced version once the codebase stabilizes.