
Understanding PID Control: The Algorithm That Runs Robotics

A beginner-friendly deep dive into PID control — what it is, how each term works, and how to tune it for your robot.

Learn Robotics Team · 7 min read

Tags: pid, control, tutorial, beginner

If you have ever driven a car, adjusted a thermostat, or balanced on one foot, you have an intuitive grasp of PID control. It is one of the most widely used algorithms in engineering -- and in robotics, it is everywhere.

This post breaks down PID control from first principles. By the end, you will understand what each term does, why they matter, and how to tune a PID controller for your own robot.

The Core Problem

Every control problem starts with the same question: how do I make a system reach and hold a desired state?

Maybe you want a drone to hover at 10 meters. Maybe you want a robot arm to rotate to exactly 45 degrees. Maybe you want a mobile robot to drive in a straight line.

In all of these cases, you have a setpoint (the desired value) and a process variable (the actual measured value). The difference between them is the error:

error = setpoint - measured_value

The goal of a controller is to compute an output signal that drives the error to zero.

The Naive Approach: Bang-Bang Control

The simplest controller is binary: if the error is positive, apply full power forward. If negative, full power backward.

if error > 0:
    output = MAX_POWER
else:
    output = -MAX_POWER

This works, but it oscillates wildly around the setpoint. Your thermostat would blast the heater until the room is too hot, then blast the AC until it is too cold. Not great.
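
The oscillation is easy to reproduce. Here is a minimal sketch with a toy thermostat model of our own (the plant crudely moves temperature in direct proportion to applied power):

```python
# Toy thermostat under bang-bang control.
MAX_POWER = 5.0
setpoint = 20.0
temp = 15.3
dt = 0.1
history = []

for _ in range(500):
    error = setpoint - temp
    output = MAX_POWER if error > 0 else -MAX_POWER
    temp += output * dt          # crude plant: temp follows power directly
    history.append(temp)

# Long after startup, the temperature is still crossing the setpoint
# on nearly every step -- the controller never settles.
crossings = sum(
    1 for a, b in zip(history[-100:-1], history[-99:])
    if (a - setpoint) * (b - setpoint) < 0
)
print(crossings)
```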

Proportional Control (The P in PID)

A smarter approach is to make the output proportional to the error. Big error? Big correction. Small error? Small correction.

output = Kp * error

Kp is the proportional gain -- a tuning parameter you choose. Think of it like a spring: the farther you pull it from equilibrium, the harder it pulls back.

Real-world analogy: Imagine steering a car toward the center of your lane. If you are far off center, you turn the wheel a lot. If you are almost centered, you make a tiny adjustment.

The problem with P-only control: It often settles with a persistent steady-state error. The output becomes too small to overcome friction or other disturbances before the error reaches zero. The system gets close to the setpoint but never quite arrives.
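
The offset is easy to see in a sketch. Below is a toy plant of our own, with a constant disturbance standing in for friction or gravity; the P-only loop settles where the correction exactly cancels the disturbance, not where the error is zero:

```python
# P-only control of a toy first-order plant with a constant
# disturbance. The loop settles where Kp * error cancels the
# disturbance, leaving a residual error of |disturbance| / Kp.
Kp = 2.0
setpoint = 10.0
position = 0.0
disturbance = -4.0   # constant pull away from the setpoint
dt = 0.01

for _ in range(5000):
    error = setpoint - position
    output = Kp * error
    velocity = output + disturbance
    position += velocity * dt

steady_state_error = setpoint - position
print(round(steady_state_error, 3))  # settles near 4.0 / 2.0 = 2.0
```

Doubling Kp halves the offset, but never removes it; that is the job of the next term.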

Integral Control (The I in PID)

The integral term fixes the steady-state error problem by accumulating error over time:

integral += error * dt
output = Kp * error + Ki * integral

If the error has been positive for a long time -- even if it is small -- the integral term grows until it is large enough to push the system the rest of the way to the setpoint.

Real-world analogy: Imagine pushing a heavy box across a floor. Proportional effort alone might not overcome static friction. But if you keep pushing (accumulating force over time), eventually the box moves.

The danger: If Ki is too high, or if the system takes a long time to respond, the integral term can grow excessively large. This is called integral windup, and it causes massive overshoot. Common mitigations include clamping the integral term to a maximum value or only enabling it when the error is small.

# Anti-windup: clamp the integral term
integral = max(-MAX_INTEGRAL, min(MAX_INTEGRAL, integral))
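
The other mitigation, conditional integration, can be sketched like this (the threshold value is an arbitrary choice for illustration, not a standard):

```python
# Anti-windup by conditional integration: only accumulate when the
# error is already small, so the I term cleans up the final offset
# without winding up during large transients.
ERROR_THRESHOLD = 1.0

def update_integral(integral, error, dt):
    if abs(error) < ERROR_THRESHOLD:
        integral += error * dt
    return integral

integral = 0.0
integral = update_integral(integral, 5.0, 0.1)   # large error: skipped
integral = update_integral(integral, 0.5, 0.1)   # small error: accumulated
print(integral)  # 0.05
```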

Derivative Control (The D in PID)

The derivative term predicts the future by looking at how fast the error is changing:

derivative = (error - previous_error) / dt
output = Kp * error + Ki * integral + Kd * derivative

If the error is shrinking quickly, the derivative term reduces the output to prevent overshoot. If the error is growing, it increases the output to respond faster.

Real-world analogy: Cruise control on a highway. If your car is at 58 mph and accelerating toward your 60 mph setpoint, you should ease off the throttle before you hit 60 -- otherwise you will overshoot to 62. The derivative term is that anticipation.

The danger: Derivative control amplifies high-frequency noise. If your sensor readings are noisy (and they always are), the derivative term can produce wild output spikes. A common fix is to apply a low-pass filter to the derivative term or to compute the derivative of the measured value instead of the error.
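
Both fixes fit in a few lines. The class below is our own sketch (the name and the filter constant are illustrative): it differentiates the measurement rather than the error, then smooths the result with a first-order low-pass filter:

```python
class FilteredDerivative:
    """Derivative-on-measurement with simple low-pass smoothing."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha        # 0 < alpha <= 1; smaller = heavier smoothing
        self.prev_measured = None
        self.filtered = 0.0

    def compute(self, measured, dt):
        if self.prev_measured is None:
            self.prev_measured = measured
            return 0.0
        # Differentiate the measurement; the sign flips relative to
        # d(error)/dt because error = setpoint - measured. This also
        # avoids a "derivative kick" when the setpoint jumps.
        raw = -(measured - self.prev_measured) / dt
        self.prev_measured = measured
        # Exponential moving average knocks down high-frequency noise.
        self.filtered += self.alpha * (raw - self.filtered)
        return self.filtered
```

The d_term would then become kd times this filtered value, in place of the raw (error - previous_error) / dt.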

The Full PID Controller

Putting it all together:

class PIDController:
    def __init__(self, kp, ki, kd, max_integral=100.0):
        self.kp = kp
        self.ki = ki
        self.kd = kd
        self.max_integral = max_integral
        self.integral = 0.0
        self.previous_error = 0.0

    def compute(self, setpoint, measured, dt):
        error = setpoint - measured

        # Proportional
        p_term = self.kp * error

        # Integral with anti-windup
        self.integral += error * dt
        self.integral = max(-self.max_integral,
                           min(self.max_integral, self.integral))
        i_term = self.ki * self.integral

        # Derivative
        derivative = (error - self.previous_error) / dt
        d_term = self.kd * derivative
        self.previous_error = error

        return p_term + i_term + d_term
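
Here is the controller in a closed loop. The plant is a toy velocity model of our own (a cart with drag proportional to speed), the class is repeated so the snippet runs standalone, and the gains were hand-picked for this plant:

```python
class PIDController:
    def __init__(self, kp, ki, kd, max_integral=100.0):
        self.kp = kp
        self.ki = ki
        self.kd = kd
        self.max_integral = max_integral
        self.integral = 0.0
        self.previous_error = 0.0

    def compute(self, setpoint, measured, dt):
        error = setpoint - measured
        p_term = self.kp * error
        self.integral += error * dt
        self.integral = max(-self.max_integral,
                            min(self.max_integral, self.integral))
        i_term = self.ki * self.integral
        derivative = (error - self.previous_error) / dt
        d_term = self.kd * derivative
        self.previous_error = error
        return p_term + i_term + d_term

# Toy plant: acceleration is the controller output minus drag
# proportional to velocity. Target: hold 2.0 m/s.
pid = PIDController(kp=4.0, ki=2.0, kd=0.05)
velocity = 0.0
dt = 0.01

for _ in range(5000):
    output = pid.compute(setpoint=2.0, measured=velocity, dt=dt)
    velocity += (output - 0.5 * velocity) * dt   # drag resists motion

print(round(velocity, 3))  # converges close to the 2.0 m/s setpoint
```

Note that the drag acts like the constant-ish disturbance from earlier: at the setpoint the P and D terms vanish, and it is the integral term alone that holds the output needed to cancel the drag.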

Tuning Strategies

Choosing good values for Kp, Ki, and Kd is the hard part. Here are the most common approaches:

Manual Tuning

  1. Set Ki and Kd to zero.
  2. Increase Kp until the system oscillates steadily around the setpoint.
  3. Add Kd to dampen the oscillation.
  4. Add a small amount of Ki to eliminate any remaining steady-state error.

This is the most common approach for hobby robotics and works well for simple systems.

Ziegler-Nichols Method

  1. Set Ki and Kd to zero.
  2. Increase Kp until the system oscillates with a constant amplitude. Record this gain as Ku (ultimate gain) and the oscillation period as Tu.
  3. Use the Ziegler-Nichols table to compute gains:
| Controller | Kp        | Ki             | Kd              |
|------------|-----------|----------------|-----------------|
| P only     | 0.5 * Ku  | --             | --              |
| PI         | 0.45 * Ku | 0.54 * Ku / Tu | --              |
| PID        | 0.6 * Ku  | 1.2 * Ku / Tu  | 0.075 * Ku * Tu |

This method gives you a reasonable starting point, but the resulting gains are often aggressive. Expect to reduce them by 20-50% for smoother performance.
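
As code, the PID row of the table plus the softening advice amounts to a few lines (the function name and the soften factor are our own conventions, not a standard API):

```python
# Classic Ziegler-Nichols PID gains from the ultimate gain Ku and the
# oscillation period Tu, with an optional factor to back off the
# notoriously aggressive textbook values.
def ziegler_nichols_pid(ku, tu, soften=1.0):
    return {
        "kp": 0.6 * ku * soften,
        "ki": 1.2 * ku / tu * soften,
        "kd": 0.075 * ku * tu * soften,
    }

# Example: a loop that oscillated steadily at Kp = 8.0 with a
# 0.5-second period, backed off by 30%.
gains = ziegler_nichols_pid(ku=8.0, tu=0.5, soften=0.7)
```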

Software-Assisted Tuning

Our platform includes a PID Tuner tool that lets you adjust gains with sliders and see the step response in real time. This is the fastest way to develop intuition for how each parameter affects behavior.

When PID Is Not Enough

PID control is powerful, but it has limits:

  • Highly nonlinear systems: PID assumes a roughly linear relationship between output and response. Systems with dead zones, saturation, or complex dynamics may need model-based controllers.
  • Multi-input, multi-output (MIMO) systems: A robot arm with 6 joints has coupled dynamics. Independent PID controllers on each joint can fight each other. State-space controllers or computed torque methods handle this better.
  • Systems requiring prediction: PID is reactive. Model Predictive Control (MPC) looks ahead and plans a sequence of actions, which is better for fast-changing environments.
  • Extremely high performance requirements: When you need near-optimal control, techniques like LQR (Linear Quadratic Regulator) provide mathematically optimal gains for linear systems.

That said, PID is the right first choice for the vast majority of robotics problems. Master it before moving on to more advanced methods.

Try It Yourself

Head to our Control Systems module or open the PID Tuner in the tools section. You can tune a PID controller for a simulated differential drive robot and watch it follow a path in real time.

Start with P-only control, observe the steady-state error, add I to fix it, watch the overshoot, then add D to tame it. There is no better way to learn PID than to feel the tradeoffs yourself.
