Learn Robotics
Module: See The World

LiDAR and Point Clouds

How LiDAR works, the difference between 2D and 3D LiDAR, point cloud structure, and what range and intensity tell you.

10 min read

LiDAR (Light Detection and Ranging) is the "radar for robots." It shoots laser beams at the world and measures how long they take to bounce back. The result: direct distance measurements — no guessing, no calibration headaches, just raw 3D geometry.

How LiDAR Works

The core principle:

  1. Emit a laser pulse toward a target
  2. Wait for the reflection to return
  3. Measure the time (usually nanoseconds)
  4. Calculate distance using the speed of light:
distance = (speed_of_light * time) / 2

(Divided by 2 because the light travels to the target and back.)

Since light travels at ~300,000 km/s (about 30 cm per nanosecond), a timing error of just 1 ns translates to roughly 15 cm of range error. Practical LiDARs therefore rely on sub-nanosecond (picosecond-scale) timing electronics to reach centimeter-level accuracy.
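The time-of-flight formula above is a one-liner in code. The pulse time below is a made-up example value, not a reading from any real sensor:

```python
# Time-of-flight range calculation (illustrative values only).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to target given the round-trip time of a laser pulse.

    Divided by 2 because the pulse travels out and back.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2

# A pulse returning after 100 ns traveled ~30 m round trip, so ~15 m range.
print(tof_distance(100e-9))
```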

Note

LiDAR is an active sensor — it emits its own light. Cameras are passive — they rely on ambient light. This means LiDAR works perfectly in total darkness, but struggles with transparent surfaces (glass, water) that don't reflect lasers well.

2D vs. 3D LiDAR

2D LiDAR (Laser Scanners)

  • Spins in a single plane (usually horizontal)
  • Outputs a 1D array of distances — one per angle
  • Common range: 270° coverage, 0.25° angular resolution (~1080 points per scan)
  • Used for: indoor navigation, obstacle avoidance, 2D mapping

[Figure: 2D LiDAR Scan Structure]
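A 2D scan is usually delivered as a start angle, an angular step, and an array of ranges. A quick sketch of converting such a scan to Cartesian points, using the 270° / 0.25° example above (the uniform 2 m "wall" is fake data for illustration):

```python
import numpy as np

# Hypothetical scan parameters modeled on the 270-degree / 0.25-degree example.
angle_min = np.deg2rad(-135.0)
angle_max = np.deg2rad(135.0)
n_points = 1081  # 270 / 0.25 + 1 endpoints
angles = np.linspace(angle_min, angle_max, n_points)
ranges = np.full(n_points, 2.0)  # fake data: everything 2 m away

# Polar -> Cartesian in the sensor frame
x = ranges * np.cos(angles)
y = ranges * np.sin(angles)
points = np.stack([x, y], axis=1)  # shape (1081, 2)
```

Real scans also carry invalid readings (see the Range section below), so production code masks those out before converting.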

3D LiDAR (Spinning/Solid-State)

  • Multiple laser beams in different vertical angles
  • Outputs a 2D array or point cloud — (x, y, z) coordinates
  • Common configurations: 16, 32, 64, or 128 beams
  • Used for: 3D mapping, autonomous driving, aerial surveying
[Figure: 3D Point Cloud Structure]

Point Clouds: The Data Structure

A point cloud is just a collection of 3D points. Each point has:

  • Position: (x, y, z) in meters (or whatever unit)
  • Intensity: How much laser light was reflected (typically 0-255 or 0-65535)
  • Optional: color (if fused with camera), timestamp, beam ID
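One way to hold such points in memory is a NumPy structured array, one record per point. The field names and dtypes below are illustrative choices (loosely in the spirit of ROS's PointCloud2 fields), not a standard layout:

```python
import numpy as np

# Hypothetical per-point record: position plus 16-bit intensity.
point_dtype = np.dtype([
    ("x", np.float32),
    ("y", np.float32),
    ("z", np.float32),
    ("intensity", np.uint16),
])

cloud = np.zeros(4, dtype=point_dtype)   # a tiny 4-point cloud
cloud[0] = (1.0, 0.5, 0.2, 1200)         # fill in one point

print(cloud["x"])  # access a whole field as a column
```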

Organized vs. Unorganized

  • Organized: Points form a grid (like image pixels) — fast to index by row/column
  • Unorganized: Points in a flat list — more flexible but slower to query
Tip

Think of an organized point cloud as a "depth image" — each pixel stores (x, y, z) instead of (r, g, b). This makes it easy to find neighboring points, which is critical for surface normal estimation and segmentation.
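A minimal sketch of the organized/unorganized distinction, assuming a hypothetical 32-beam sensor with 1024 columns per revolution: an organized cloud is just an H×W×3 array, so neighbor queries and flattening are plain array indexing:

```python
import numpy as np

H, W = 32, 1024  # hypothetical: 32 beams x 1024 azimuth steps
organized = np.random.rand(H, W, 3).astype(np.float32)  # (x, y, z) per "pixel"

# Neighbor lookup is simple slicing: the 3x3 neighborhood of point (r, c).
r, c = 10, 500
neighbors = organized[r - 1:r + 2, c - 1:c + 2].reshape(-1, 3)  # 9 points

# Flattening yields the unorganized view of the same data.
unorganized = organized.reshape(-1, 3)  # shape (32768, 3), grid structure lost
```

In the unorganized form, finding those same 9 neighbors requires a spatial search (e.g. a k-d tree), which is why organized clouds are preferred for normal estimation.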

Range and Intensity

Range (Distance)

  • Maximum range: 10m (indoor LiDAR) to 200m+ (automotive LiDAR)
  • Invalid readings: Returned as inf, NaN, or a special value (often 0 or max_range + 1)
  • Failure cases: Transparent objects, very dark/absorbing surfaces, max range exceeded
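A common first preprocessing step is masking out invalid ranges. The sentinel conventions below (0 and non-finite values) follow the failure modes just listed; the data and maximum range are made up:

```python
import numpy as np

ranges = np.array([1.2, np.inf, 0.0, 45.3, np.nan, 8.7])
max_range = 100.0  # hypothetical sensor maximum

# Keep only finite readings inside the sensor's valid interval;
# 0.0 is treated here as an "invalid" sentinel as well.
valid = np.isfinite(ranges) & (ranges > 0.0) & (ranges <= max_range)
clean = ranges[valid]
print(clean)  # the inf, 0.0, and NaN readings are dropped
```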

Intensity (Reflectivity)

Measures how much laser light bounces back. High intensity means:

  • Bright/reflective surfaces (white walls, metal, retroreflectors)
  • Close objects (more photons return)

Low intensity means:

  • Dark/absorbing surfaces (black rubber, asphalt)
  • Far objects (signal weakens with distance)
  • Glancing angles (laser hits at a steep angle, less reflection)

[Figure: Filtering Point Clouds]
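Putting range and intensity together, a typical filter drops low-intensity returns and crops the cloud to a region of interest around the sensor. The thresholds and data below are invented for illustration:

```python
import numpy as np

# Fake cloud: 1000 points as (x, y, z, intensity).
rng = np.random.default_rng(0)
cloud = np.column_stack([
    rng.uniform(-50, 50, size=(1000, 3)),   # x, y, z in meters
    rng.integers(0, 256, size=(1000, 1)),   # 8-bit intensity
]).astype(np.float32)

# Drop low-intensity returns (likely dark surfaces or glancing hits)
# and crop to a 30 m radius around the sensor origin.
dist = np.linalg.norm(cloud[:, :3], axis=1)
keep = (cloud[:, 3] >= 20) & (dist <= 30.0)
filtered = cloud[keep]
```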

Common LiDAR Use Cases

Application       | LiDAR Type        | Why
------------------|-------------------|--------------------------------------------
Indoor navigation | 2D (single plane) | Cheap, fast, perfect for flat environments
Outdoor mapping   | 3D (64+ beams)    | Captures terrain, trees, buildings
Warehouse robots  | 2D                | Pallets and obstacles are mostly at one height
Self-driving cars | 3D (128 beams)    | Need to see pedestrians, cars, road geometry
Drones            | 3D (lightweight)  | Terrain mapping, collision avoidance

What's Next?

LiDAR gives us 3D points, but cameras give us rich visual information. The next lesson explores depth perception — how we can get 3D distance data from cameras alone, and when to choose LiDAR vs. vision-based depth.
