Sensor Fusion  ·  State Estimation  ·  Multi-Object Tracking

Tracking Multiple Drones at Once
Using Radar + Camera

Radar sees distance and direction but generates false signals constantly. Camera sees shape but goes blind behind obstacles. This project builds the math to combine both — keeping a lock on every drone in the airspace, even when they fly directly through each other's paths.

0 — times the system confused two drones, even at closest approach (~9 m apart)
2.57 m — average position error across all 3 targets, 100 timesteps
3 / 3 — drones successfully tracked despite radar clutter every frame
25 — automated tests passing, covering math, lifecycle, and end-to-end

What this project is

The core challenge, explained without jargon.

01 · The problem: sensors lie

Radar reports where something might be — but with noise, missed detections, and dozens of phantom signals (called clutter) every second. Camera gives a cleaner picture but goes dark the moment anything blocks its view. Neither sensor alone is reliable enough to stake a tracking decision on.

02 · The solution: weighted combination

Instead of trusting one sensor, the system maintains a probability distribution — a best guess of where each drone is, plus a measure of how confident that guess is. Every new sensor reading shifts the distribution. The less noisy the reading, the more it shifts things. This is the Kalman filter idea, dating to Apollo-era navigation.
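A one-dimensional sketch makes the weighting explicit (illustrative only, not this project's actual filter): the gain k decides how far a new reading shifts the estimate, based on how noisy the sensor is relative to the prediction.

```python
# 1-D sketch of the weighted-combination idea. The gain k blends
# prediction and measurement according to their variances.
def kalman_update_1d(x_pred, p_pred, z, r):
    """x_pred, p_pred: predicted position and its variance.
    z, r: sensor reading and its noise variance."""
    k = p_pred / (p_pred + r)          # gain: near 1 for a precise sensor, near 0 for a noisy one
    x_new = x_pred + k * (z - x_pred)  # the reading shifts the estimate by a fraction k
    p_new = (1 - k) * p_pred           # confidence improves after every update
    return x_new, p_new

# A precise reading (variance 1) moves the estimate far more than a noisy one (variance 100).
print(kalman_update_1d(10.0, 9.0, 13.0, 1.0))    # ≈ (12.7, 0.9)
print(kalman_update_1d(10.0, 9.0, 13.0, 100.0))  # ≈ (10.25, 8.26)
```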

03 · The hard part: crossing paths

When two drones fly close together, the system receives signals from both but doesn't know which signal came from which drone. Assign them wrong once and both tracks are corrupted. FusionTrack uses an optimization algorithm (Hungarian method) to find the globally best assignment — not just the closest match — avoiding identity swaps entirely.


Live Tracking Demo

Three drones, 100 timesteps, constant clutter. Watch the system track each drone from birth to crossing and back out — never losing identity.

[Animation: multi-object tracking of 3 crossing targets, showing uncertainty ellipses, fading trails, and clutter measurements]
Red × — raw radar signal (could be real or clutter)
Filled triangle — confirmed drone track
Hollow circle — newly detected, not yet confirmed
Ellipse — 95% confidence region for position
What to watch at the crossing (roughly halfway through): Two drones fly within ~9 meters of each other — so close that each is inside the other's expected detection zone. The system must decide, every frame, which radar blip belongs to which drone. It never gets it wrong. The fading colored trails show where each drone has been; the ellipses shrink as the system gets more confident in each position.

How it works

Each timestep, the tracker runs five steps — from prediction through lifecycle management.

Plain English

Every frame: predict where each drone probably moved based on its last known velocity → check whether each incoming signal is plausibly from a known drone or random noise → match signals to drones optimally (one signal per drone, one drone per signal) → update each drone's estimated position using the matched signal → manage new arrivals and drones that have gone quiet.

1. Predict · Where should each drone be now? · Constant-velocity model: P ← F·P·Fᵀ + Q
2. Gate · Is this signal close enough to be real? · Mahalanobis d² < χ²(2 DOF, 99%)
3. Assign · Which signal belongs to which drone? · Hungarian method, O(n³)
4. Update · Correct the position estimate · EKF with polar h(x) = [r, θ]
5. Manage · Handle new targets and lost ones · Birth / confirm / delete
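A condensed sketch of one frame of this loop. The Track class and its predict / mahalanobis_sq / update methods are assumed names for illustration, not the project's actual API:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import chi2

GATE = chi2.ppf(0.99, df=2)  # 99% χ² gate for a 2-D measurement

def process_frame(tracks, detections):
    # 1. Predict: propagate each track with the constant-velocity model
    for t in tracks:
        t.predict()  # x ← F x,  P ← F P Fᵀ + Q

    # 2. Gate: cost matrix of Mahalanobis d²; sentinel cost for infeasible pairs
    cost = np.full((len(tracks), len(detections)), 1e9)
    for i, t in enumerate(tracks):
        for j, z in enumerate(detections):
            d2 = t.mahalanobis_sq(z)
            if d2 < GATE:
                cost[i, j] = d2

    # 3. Assign: globally optimal matching over all pairings at once
    rows, cols = linear_sum_assignment(cost)
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e9]

    # 4. Update matched tracks; 5. Manage hits, misses, births, deletions
    for i, j in matches:
        tracks[i].update(detections[j])
```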
📡 Fusing radar and camera

Radar reports distance and angle from the sensor. Camera reports pixel coordinates. These are measured in completely different units, with different noise characteristics. The filter converts both into a single best-estimate position — weighting each sensor by how trustworthy it currently is.

Technical: native polar measurement model h(x) = [√(x²+y²), atan2(y,x)] with analytic 2×4 Jacobian. Angle-normalised innovation prevents ±π wraparound. Camera uses a linear H after pixel→world scaling.
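A sketch of that measurement model and its Jacobian, assuming a state ordering of [px, py, vx, vy] (the ordering and function names are assumptions, not the project's code):

```python
import numpy as np

def h(x):
    """Polar measurement model: state [px, py, vx, vy] -> [range, bearing]."""
    px, py = x[0], x[1]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def H_jacobian(x):
    """Analytic 2×4 Jacobian of h at x (assumes range > 0)."""
    px, py = x[0], x[1]
    r2 = px * px + py * py
    r = np.sqrt(r2)
    return np.array([[ px / r,   py / r,  0.0, 0.0],   # ∂range/∂state
                     [-py / r2,  px / r2, 0.0, 0.0]])  # ∂bearing/∂state

def wrap(angle):
    """Normalise a bearing innovation into [-π, π) to avoid ±π wraparound."""
    return (angle + np.pi) % (2 * np.pi) - np.pi
```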

🎯 Filtering out false alarms

Radar generates phantom signals constantly — birds, reflections, electrical noise. Before any matching happens, each incoming signal is tested: is it close enough to a known drone's predicted position, given how uncertain that prediction is? Signals that fail this test are discarded before they can corrupt any tracks.

Technical: Mahalanobis distance² in polar measurement space tested against χ²(2 DOF, 99%) gate before Hungarian assignment. Infeasible pairs receive sentinel cost 10⁹.
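A sketch of the gate test; z is a polar measurement, z_pred the predicted measurement, and S the innovation covariance, all assumed to be available at this point:

```python
import numpy as np
from scipy.stats import chi2

GATE = chi2.ppf(0.99, df=2)  # ≈ 9.21 for a 2-D measurement

def passes_gate(z, z_pred, S):
    nu = z - z_pred
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi  # angle-normalised innovation
    d2 = nu @ np.linalg.solve(S, nu)               # Mahalanobis distance²
    return d2 < GATE
```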

🔀 Solving the assignment problem

When multiple drones are visible simultaneously, each sensor signal must be claimed by exactly one drone. A naïve approach — assign each signal to its nearest drone in sequence — can cascade errors when targets are close. Instead, the system finds the globally optimal assignment: minimize the total cost across all pairings at once.

Technical: scipy.optimize.linear_sum_assignment (Jonker-Volgenant, O(n³)). Order-independent global optimum vs. greedy nearest-neighbour.
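A toy illustration of why the global optimum matters (the cost numbers are invented): greedy matching grabs the single cheapest pair first and leaves the other track with a terrible match, while the optimal solver minimises the total.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = tracks, columns = measurements; entries are gated Mahalanobis d²,
# with 1e9 as the sentinel cost for pairs that failed the gate.
cost = np.array([[1.0, 2.0, 1e9],
                 [1.5, 9.0, 1e9]])

rows, cols = linear_sum_assignment(cost)
print(cols)  # [1 0]: track 0 ↔ measurement 1, track 1 ↔ measurement 0, total 3.5
# Greedy would pick (0, 0) first at cost 1.0, forcing (1, 1) at 9.0: total 10.0
```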

🛤️ Managing track confidence

A single radar blip shouldn't be enough to declare a drone exists — it's probably clutter. A drone that stops producing signals for several frames probably left the area. The tracker requires two consecutive detections before confirming a track, and deletes it after three missed frames. This keeps the track list clean without human supervision.

Technical: TENTATIVE → CONFIRMED (hits ≥ 2) → DELETED (misses ≥ 3). Stale tentative tracks pruned after 3 frames.
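A minimal sketch of that lifecycle with the thresholds quoted above; class and method names are illustrative, not the project's:

```python
from enum import Enum, auto

class Status(Enum):
    TENTATIVE = auto()
    CONFIRMED = auto()
    DELETED = auto()

class TrackLifecycle:
    CONFIRM_HITS = 2   # consecutive detections before a track is trusted
    DELETE_MISSES = 3  # missed frames before a track is dropped

    def __init__(self):
        self.status, self.hits, self.misses = Status.TENTATIVE, 0, 0

    def on_hit(self):
        self.hits += 1
        self.misses = 0
        if self.status is Status.TENTATIVE and self.hits >= self.CONFIRM_HITS:
            self.status = Status.CONFIRMED

    def on_miss(self):
        self.misses += 1
        if self.misses >= self.DELETE_MISSES:
            self.status = Status.DELETED  # also prunes stale tentative tracks
```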


Two filter variants

The simpler version converts radar readings to x/y coordinates first, then filters. The advanced version works directly in radar's native units — more accurate at long range where the conversion introduces distortion.

KFTracker (linear) — converts distance/angle → x/y first, then filters · accuracy: approximate (fixed error budget) · best for: short range, quick prototypes
EKFTracker (extended) — filters directly in distance/angle space, no conversion · accuracy: exact (physically derived error model) · best for: long range, where conversion error grows with distance
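A quick sanity check of the "conversion error grows with distance" claim, using the radar angle noise from the Results section: cross-range blur from bearing noise alone is roughly range × σθ.

```python
import numpy as np

sigma_theta = np.deg2rad(0.5)  # radar bearing noise, 1σ (from the Results section)
for r in (100.0, 270.0, 1000.0):
    print(f"{r:6.0f} m range -> {r * sigma_theta:.2f} m lateral blur")
# 100 m -> 0.87 m   270 m -> 2.36 m (the ≈2.3 m quoted below)   1000 m -> 8.73 m
```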

Results

Simulated scenario: 3 drones flying straight-line paths that converge, with radar clutter generated randomly every frame.

Bottom line

When two drones flew within 9 meters of each other — so close each was inside the other's detection zone — the system correctly tracked both without ever swapping their identities. Position estimates stayed within 2.57 meters of true position on average, which matches what you'd expect given the radar's physical noise level at that range.

Tracking performance

Identity swaps at crossing: 0 — never confused which drone was which
Drones correctly tracked: 3 / 3 — all confirmed from start to finish
Average position error: 2.57 m — long-lived confirmed tracks
Per-drone error: 2.51 / 2.56 / 2.62 m — each target independently

Sensor noise (simulation inputs)

Radar range noise: 3 m (1σ) — how far off distance readings are
Radar angle noise: 0.5° (1σ) — how far off bearing readings are
Expected lateral error at 270 m: ≈ 2.3 m — angle noise × range = cross-range blur
Camera noise: 8 px ≈ 4 m — pixel jitter converted to world metres

Run it yourself

```bash
# Install
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Single-drone EKF fusion demo (plots trajectory + error over time)
python -m src.fusion

# Multi-drone tracker — prints metrics, shows plot, saves mot_demo.gif
python -m src.mot

# 25 automated tests
pytest tests/ -q
```