SLAM Point Cloud & Camera Trajectory Visualization
An interactive 3D visualization of Visual SLAM (Simultaneous Localization and Mapping) output, showing how autonomous robots and self-driving cars build maps while tracking their location in real-time.
What is Visual SLAM?
Visual SLAM is a fundamental technology in robotics and autonomous vehicles that solves two problems simultaneously:
- Localization: Where am I?
- Mapping: What does my environment look like?
Using only camera images, SLAM algorithms perform the following steps (a simplified per-frame loop is sketched after the list):
- Extract visual features (corners, edges) from each frame
- Track features between consecutive frames
- Estimate camera motion from feature correspondences
- Triangulate 3D positions of observed features
- Jointly optimize camera poses and map points (bundle adjustment)
- Detect loop closures when revisiting known locations
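The sketch below is a highly simplified outline of that per-frame loop in TypeScript. It is illustrative only, not this project's tracking code: all type and function names (`Frame`, `MapPoint`, `SlamBackend`, `extractFeatures`, etc.) are hypothetical placeholders, and loop-closure detection is omitted.

```typescript
// Illustrative-only data types; real SLAM systems carry far more state.
interface Feature { u: number; v: number; descriptor: Uint8Array }
interface Pose { position: [number, number, number]; quaternion: [number, number, number, number] }
interface MapPoint { position: [number, number, number]; observations: number }
interface Frame { id: number; features: Feature[]; pose: Pose; isKeyframe: boolean }

// Hypothetical helpers a real SLAM front end / back end would provide.
interface SlamBackend {
  extractFeatures(image: ImageData): Feature[];
  matchFeatures(prev: Feature[], curr: Feature[]): Array<[number, number]>;
  estimatePose(matches: Array<[number, number]>, map: MapPoint[], prevPose: Pose): Pose;
  triangulateNewPoints(prev: Frame, curr: Frame): MapPoint[];
  shouldInsertKeyframe(curr: Frame, lastKeyframe: Frame): boolean;
  localBundleAdjustment(keyframes: Frame[], map: MapPoint[]): void;
}

function processFrame(
  image: ImageData,
  prev: Frame,
  map: MapPoint[],
  keyframes: Frame[],
  backend: SlamBackend,
): Frame {
  const features = backend.extractFeatures(image);                     // 1. extract
  const curr: Frame = { id: prev.id + 1, features, pose: prev.pose, isKeyframe: false };
  const matches = backend.matchFeatures(prev.features, features);      // 2. track
  curr.pose = backend.estimatePose(matches, map, prev.pose);           // 3. localize
  if (backend.shouldInsertKeyframe(curr, keyframes[keyframes.length - 1])) {
    curr.isKeyframe = true;
    keyframes.push(curr);
    map.push(...backend.triangulateNewPoints(prev, curr));             // 4. map
    backend.localBundleAdjustment(keyframes, map);                     // 5. refine
  }
  return curr;                                                         // loop closure omitted for brevity
}
```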
Interactive Features
Real-time Playback
Watch as the camera moves through the environment while the map grows incrementally. The visualization shows three layers (assumed data shapes are sketched after the list):
- Point Cloud: 3D map points built from visual features
- Camera Trajectory: Estimated path of the camera
- Keyframes: Important frames stored by SLAM for optimization
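A minimal sketch of the data these layers consume. The exact shapes used in the project may differ, so treat these interfaces as assumptions:

```typescript
// Assumed shapes for the three rendered layers (not the project's actual types).
interface TrajectoryPose {
  frame: number;                                  // frame index on the timeline
  position: [number, number, number];             // estimated camera position
  quaternion: [number, number, number, number];   // estimated camera orientation
  isKeyframe: boolean;                            // keyframes are drawn distinctly
}

interface CloudPoint {
  position: [number, number, number];
  color?: [number, number, number];               // RGB in [0, 1] if the source provides it
  firstSeenFrame: number;                         // drives the "Age" color mode
  observations: number;                           // drives the "Observations" color mode
}

interface SlamSequence {
  trajectory: TrajectoryPose[];
  points: CloudPoint[];
}
```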
Color Modes
Visualize the point cloud with different color mappings (one possible per-point mapping is sketched after the list):
- Depth: Near points (warm colors) vs far points (cool colors)
- Age: Recently added (bright) vs old points (dim)
- Observations: Well-observed points (saturated) vs rarely seen (faded)
- RGB: True colors from camera images (if available)
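One way such a per-point mapping can be computed, using HSL so depth drives hue, age drives lightness, and observation count drives saturation. The normalization choices and constants below are illustrative assumptions, not the project's exact color ramp:

```typescript
import * as THREE from 'three';

type ColorMode = 'depth' | 'age' | 'observations' | 'rgb';

// Returns an RGB triple in [0, 1] for one point.
function colorForPoint(
  mode: ColorMode,
  depthNorm: number,        // 0 = nearest, 1 = farthest
  ageNorm: number,          // 0 = just added, 1 = oldest
  obsNorm: number,          // 0 = rarely seen, 1 = well observed
  rgb?: [number, number, number],
): [number, number, number] {
  const c = new THREE.Color();
  switch (mode) {
    case 'depth':
      c.setHSL(0.66 * depthNorm, 1.0, 0.5);       // warm (near) -> cool (far)
      break;
    case 'age':
      c.setHSL(0.33, 0.8, 0.7 - 0.5 * ageNorm);   // bright (new) -> dim (old)
      break;
    case 'observations':
      c.setHSL(0.1, 0.2 + 0.8 * obsNorm, 0.5);    // faded -> saturated
      break;
    case 'rgb':
      if (rgb) c.setRGB(rgb[0], rgb[1], rgb[2]);  // true colors when available
      break;
  }
  return [c.r, c.g, c.b];
}
```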
Playback Controls
- Play/pause animation
- Adjust speed (0.25x to 5x)
- Timeline scrubber to jump to any frame
- Frame-by-frame stepping
- Toggle visibility of different layers
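A minimal React Three Fiber hook covering play/pause, speed, and stepping could look like the sketch below. The hook name, base frame rate, and returned API are assumptions, not the project's actual implementation:

```tsx
import { useRef, useState, useCallback } from 'react';
import { useFrame } from '@react-three/fiber';

// Advances a frame counter while playing; scrubbing and stepping set it directly.
function usePlayback(totalFrames: number, baseFps = 30) {
  const [frame, setFrame] = useState(0);
  const [playing, setPlaying] = useState(true);
  const [speed, setSpeed] = useState(1);          // 0.25x .. 5x
  const accumulator = useRef(0);

  useFrame((_, delta) => {
    if (!playing) return;
    accumulator.current += delta * baseFps * speed;
    if (accumulator.current >= 1) {
      const step = Math.floor(accumulator.current);
      accumulator.current -= step;
      setFrame(f => Math.min(f + step, totalFrames - 1));
    }
  });

  const stepBy = useCallback(
    (n: number) => setFrame(f => Math.min(Math.max(f + n, 0), totalFrames - 1)),
    [totalFrames],
  );

  return { frame, setFrame, playing, setPlaying, speed, setSpeed, stepBy };
}
```

Because it uses `useFrame`, a hook like this has to run inside a component mounted under the `<Canvas>` tree.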
Data Loading
Load different SLAM data sources (a PLY-loading sketch follows the list):
- Synthetic Demo: Pre-generated circular camera path with simulated points
- PLY Point Clouds: Load real SLAM point clouds from .ply files
- Performance Adaptive: Automatically optimizes for your device
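Loading a .ply file can be done with Three.js's PLYLoader, which produces a BufferGeometry with a position attribute (and a color attribute when the file includes one). The import path varies between Three.js versions, so treat this as a sketch:

```typescript
import * as THREE from 'three';
import { PLYLoader } from 'three/examples/jsm/loaders/PLYLoader.js';

// Parse a user-selected .ply File into flat position and optional color buffers.
async function loadPly(file: File): Promise<{ positions: Float32Array; colors?: Float32Array }> {
  const buffer = await file.arrayBuffer();
  const geometry = new PLYLoader().parse(buffer);

  const positions = (geometry.getAttribute('position') as THREE.BufferAttribute).array as Float32Array;
  const colors = geometry.hasAttribute('color')
    ? ((geometry.getAttribute('color') as THREE.BufferAttribute).array as Float32Array)
    : undefined;

  return { positions, colors };
}
```

For files served over HTTP, `PLYLoader.load(url, onLoad)` can be used instead of reading a `File` object.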
Technical Details
Efficient Rendering
The visualization uses optimized Three.js techniques for smooth performance (a minimal example follows the list):
- THREE.Points with BufferGeometry for GPU-accelerated point cloud rendering
- Adaptive subsampling based on device performance tier (mobile vs desktop)
- Incremental updates as new frames are processed
- Vertex colors for efficient coloring without shader overhead
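The core pattern is a single THREE.Points object whose BufferGeometry carries position and color attributes, so the whole cloud is drawn in one draw call. A minimal standalone sketch of that pattern (the default point size here is just an assumption):

```typescript
import * as THREE from 'three';

// Build a point cloud from flat position/color arrays (xyz / rgb triplets).
function makePointCloud(positions: Float32Array, colors: Float32Array, pointSize = 0.05): THREE.Points {
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));

  // vertexColors lets each point carry its own color without a custom shader.
  const material = new THREE.PointsMaterial({
    size: pointSize,
    vertexColors: true,
    sizeAttenuation: true,
  });
  return new THREE.Points(geometry, material);
}
```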
Device Adaptation
Performance is automatically optimized based on your device (a sketch of the tier selection follows the table):
| Tier | Max Points | Subsample | Point Size |
|---|---|---|---|
| High | 100k | 1x | 0.05 |
| Medium | 50k | 2x | 0.04 |
| Low | 20k | 4x | 0.03 |
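A rough sketch of how such a tier could be chosen at runtime. The heuristics (user agent, core count) and thresholds are assumptions, not the project's exact logic:

```typescript
interface PerformanceTier {
  maxPoints: number;
  subsample: number;   // keep every Nth point
  pointSize: number;
}

// Crude device heuristic: treat low-core mobile devices as the "low" tier.
function detectTier(): PerformanceTier {
  const cores = navigator.hardwareConcurrency || 4;
  const isMobile = /Mobi|Android/i.test(navigator.userAgent);
  if (!isMobile && cores >= 8) return { maxPoints: 100_000, subsample: 1, pointSize: 0.05 };
  if (!isMobile || cores >= 6) return { maxPoints: 50_000, subsample: 2, pointSize: 0.04 };
  return { maxPoints: 20_000, subsample: 4, pointSize: 0.03 };
}
```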
Point Clouds:
- PLY (Polygon File Format) - ASCII format
- Supports position (x, y, z) and color (red, green, blue) properties
- Automatically subsamples large datasets (>100k points)
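For display purposes, subsampling a flat xyz buffer by a fixed stride is sufficient; a small sketch:

```typescript
// Keep every `stride`-th point of a flat [x, y, z, x, y, z, ...] buffer.
function subsamplePositions(positions: Float32Array, stride: number): Float32Array {
  if (stride <= 1) return positions;
  const pointCount = Math.floor(positions.length / 3 / stride);
  const out = new Float32Array(pointCount * 3);
  for (let i = 0; i < pointCount; i++) {
    const src = i * stride * 3;
    out[i * 3] = positions[src];
    out[i * 3 + 1] = positions[src + 1];
    out[i * 3 + 2] = positions[src + 2];
  }
  return out;
}
```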
Synthetic Demo:
- 200 frames of camera motion along a circular path
- ~10,000 map points with realistic distribution
- Keyframes marked every 10th frame
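A simplified version of how such a synthetic sequence can be generated. The circle radius, point spread, and orientation convention are placeholders rather than the demo's exact parameters:

```typescript
interface DemoFrame { position: [number, number, number]; yaw: number; isKeyframe: boolean }

// Camera orbits a circle, facing its center; every 10th frame is a keyframe.
function generateDemoTrajectory(frames = 200, radius = 5): DemoFrame[] {
  const trajectory: DemoFrame[] = [];
  for (let i = 0; i < frames; i++) {
    const angle = (i / frames) * Math.PI * 2;
    trajectory.push({
      position: [radius * Math.cos(angle), 0, radius * Math.sin(angle)],
      yaw: angle + Math.PI,          // look toward the circle's center
      isKeyframe: i % 10 === 0,
    });
  }
  return trajectory;
}

// Scatter ~10,000 points around the scene as a stand-in for triangulated features.
function generateDemoPoints(count = 10_000, spread = 8): Float32Array {
  const positions = new Float32Array(count * 3);
  for (let i = 0; i < count * 3; i++) {
    positions[i] = (Math.random() - 0.5) * spread;
  }
  return positions;
}
```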
SLAM Concepts Demonstrated
This visualization shows the core output of SLAM systems:
Feature Triangulation
How 2D image points become 3D map points through geometric triangulation from multiple camera views.
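For intuition, the midpoint method triangulates a point from two camera rays by finding the closest point between them; real pipelines typically use linear (DLT) triangulation over many views, so the sketch below is only a geometric illustration:

```typescript
import * as THREE from 'three';

// Closest-point ("midpoint") triangulation of two viewing rays.
// c1, c2: camera centers; d1, d2: normalized ray directions through the matched pixels.
function triangulateMidpoint(
  c1: THREE.Vector3, d1: THREE.Vector3,
  c2: THREE.Vector3, d2: THREE.Vector3,
): THREE.Vector3 | null {
  const w0 = new THREE.Vector3().subVectors(c1, c2);
  const a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
  const d = d1.dot(w0), e = d2.dot(w0);
  const denom = a * c - b * b;
  if (Math.abs(denom) < 1e-9) return null;           // rays are (nearly) parallel
  const s = (b * e - c * d) / denom;                 // parameter along ray 1
  const t = (a * e - b * d) / denom;                 // parameter along ray 2
  const p1 = new THREE.Vector3().copy(d1).multiplyScalar(s).add(c1);
  const p2 = new THREE.Vector3().copy(d2).multiplyScalar(t).add(c2);
  return p1.add(p2).multiplyScalar(0.5);             // midpoint of the closest points
}
```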
Camera Pose Estimation
Computing camera position and orientation by matching observed features to the map.
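From the visualization's point of view, each estimated pose is just a rigid transform; applying one to the camera marker is a one-liner with Three.js (the column-major 4x4 layout and the source of the pose are assumed):

```typescript
import * as THREE from 'three';

// Apply an estimated camera-to-world pose (4x4, column-major) to any Object3D,
// e.g. the trajectory's current camera marker.
function applyPose(object: THREE.Object3D, poseColumnMajor: number[]): void {
  const m = new THREE.Matrix4().fromArray(poseColumnMajor);
  m.decompose(object.position, object.quaternion, object.scale);
}
```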
Keyframe Selection
Which frames are stored for optimization - typically those with significant camera motion or new observations.
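A toy version of such a selection rule, with threshold values that are purely illustrative rather than taken from any specific system:

```typescript
import * as THREE from 'three';

// Insert a keyframe when the camera has moved or rotated enough since the last one.
function shouldInsertKeyframe(
  lastKeyframePos: THREE.Vector3, lastKeyframeQuat: THREE.Quaternion,
  currentPos: THREE.Vector3, currentQuat: THREE.Quaternion,
  minTranslation = 0.3,                               // meters (illustrative)
  minRotationRad = THREE.MathUtils.degToRad(15),      // illustrative
): boolean {
  const moved = currentPos.distanceTo(lastKeyframePos) > minTranslation;
  const rotated = currentQuat.angleTo(lastKeyframeQuat) > minRotationRad;
  return moved || rotated;
}
```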
Incremental Mapping
Building the map frame-by-frame as the camera explores new areas.
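On the rendering side, incremental growth maps naturally onto a preallocated buffer whose draw range expands as points arrive. A sketch of that pattern (capacity handling and naming are assumptions):

```typescript
import * as THREE from 'three';

// Preallocate space for `capacity` points and reveal them as frames are processed.
class GrowingPointCloud {
  readonly points: THREE.Points;
  private count = 0;
  private readonly positionAttr: THREE.BufferAttribute;

  constructor(capacity: number, pointSize = 0.05) {
    const geometry = new THREE.BufferGeometry();
    this.positionAttr = new THREE.BufferAttribute(new Float32Array(capacity * 3), 3);
    geometry.setAttribute('position', this.positionAttr);
    geometry.setDrawRange(0, 0);                       // start with nothing visible
    this.points = new THREE.Points(geometry, new THREE.PointsMaterial({ size: pointSize }));
    this.points.frustumCulled = false;                 // buffer is only partially filled
  }

  // Append a batch of new xyz triplets (e.g. points triangulated for the latest frame).
  // Overflow beyond the preallocated capacity is not handled in this sketch.
  append(newPositions: Float32Array): void {
    const array = this.positionAttr.array as Float32Array;
    array.set(newPositions, this.count * 3);
    this.count += newPositions.length / 3;
    this.positionAttr.needsUpdate = true;              // re-upload to the GPU
    this.points.geometry.setDrawRange(0, this.count);
  }
}
```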
Real-World Applications
This visualization demonstrates technology used in:
- Autonomous Vehicles: Self-driving cars mapping roads while navigating
- Drones: UAVs creating maps for inspection or delivery
- Mobile Robots: Warehouse robots, vacuum cleaners, delivery bots
- AR/VR: Spatial tracking for augmented reality devices
- Space Exploration: NASA's Astrobee robots on the ISS
Connection to My Work
This project directly extends my Visual SLAM project (ORB-SLAM2 in ROS) and relates to Astrobee (ISS navigation):
Visual SLAM Project
- Implements ORB-SLAM2 algorithm with GPU acceleration
- Runs in Robot Operating System (ROS)
- Tests in ray-traced simulation environments
- This visualization shows what the algorithm produces
Astrobee Project
- Free-flying robots on International Space Station
- Use visual-inertial SLAM for localization and mapping
- Navigation cameras build 3D maps of ISS modules
- This visualization helps understand how they perceive their environment
Dataset Recommendations
Want to try real SLAM data? Here are recommended datasets:
TUM RGB-D Benchmark
Desktop-scale indoor environments:
KITTI Vision Benchmark
Outdoor driving sequences:
EuRoC MAV Dataset
Drone flights with precision ground truth:
Usage Tips
Loading Point Clouds
- Click "Load PLY File" button in the Data folder
- Select a .ply file from your computer
- The point cloud will replace the synthetic demo
- Use color modes to explore different visualizations
Understanding Color Modes
- Depth: See how far points are from the camera (red=near, blue=far)
- Age: Understand when points were first observed (bright=new, dim=old)
- Observations: Identify well-tracked features (saturated=reliable)
- RGB: View natural colors if your dataset includes them
Performance Tips
- Use the subsample control to reduce the point count if you experience lag
- Lower the point size for a cleaner view of dense clouds
- Try different playback speeds to see the map build at different rates
Technical Stack
- React Three Fiber: Declarative 3D rendering in React
- Three.js: WebGL graphics engine
- Leva: Interactive control panel
- TypeScript: Type-safe implementation
- BufferGeometry: Efficient GPU memory management
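Putting the stack together, a minimal React Three Fiber component for the cloud might look like this; component and prop names are placeholders, not the project's actual components:

```tsx
import * as THREE from 'three';
import { useMemo } from 'react';
import { Canvas } from '@react-three/fiber';

function PointCloud({ positions, colors, size = 0.05 }: {
  positions: Float32Array;
  colors: Float32Array;
  size?: number;
}) {
  // Rebuild the geometry only when the underlying buffers change.
  const geometry = useMemo(() => {
    const g = new THREE.BufferGeometry();
    g.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    g.setAttribute('color', new THREE.BufferAttribute(colors, 3));
    return g;
  }, [positions, colors]);

  return (
    <points geometry={geometry}>
      <pointsMaterial size={size} vertexColors sizeAttenuation />
    </points>
  );
}

export function Viewer({ positions, colors }: { positions: Float32Array; colors: Float32Array }) {
  return (
    <Canvas camera={{ position: [5, 5, 5], fov: 60 }}>
      <PointCloud positions={positions} colors={colors} />
    </Canvas>
  );
}
```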
Future Enhancements
Planned features for future versions:
- Camera Frustum: Visualize the camera's field of view as it moves
- Loop Closures: Show when the algorithm recognizes revisited locations
- Statistics Panel: Real-time tracking status and metrics
- TUM/KITTI Loaders: Load trajectory files from standard datasets
- Ground Truth Overlay: Compare estimated vs actual camera path
- Export Tools: Save processed point clouds or trajectories
References
SLAM Algorithms
Point Cloud Processing
SLAM Visualization
- RViz - ROS visualization tool
- Pangolin - Lightweight 3D visualization
- Meshlab - Point cloud viewer and processor
This visualization helps bridge the gap between abstract SLAM algorithms and their concrete 3D output, making it easier to understand how robots perceive and navigate their world.