# Dynamic Policy Comparison Brief

## Scenario

A mixed fleet of long-endurance sentinels and faster scout drones must monitor a border corridor, protect a logistics hub, and respond to time-varying surveillance spikes.

## Ranking

| strategy | mission_fit_score | final_weighted_coverage | avg_task_service_rate | task_completion_rate | mean_task_response_time | coverage_efficiency |
| --- | --- | --- | --- | --- | --- | --- |
| priority_patrol | 0.680 | 0.562 | 0.384 | 1.000 | 2.150 | 0.000 |
| greedy_patrol | 0.620 | 0.496 | 0.381 | 0.750 | 2.467 | 0.000 |
| patrol | 0.516 | 0.834 | 0.086 | 0.850 | 6.500 | 0.001 |
| static | 0.491 | 0.107 | 0.106 | 0.750 | 0.000 | 0.000 |
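How `mission_fit_score` is actually combined is not stated above; a minimal sketch, assuming it is a weighted sum of the normalized metrics, could look like the following. The weights and the `metrics` dictionary here are illustrative assumptions, not the values used to produce the table.

```python
# Hypothetical sketch: combining per-policy metrics into a single
# mission-fit score. Weights are assumed, not taken from the brief.
metrics = {
    "priority_patrol": {"coverage": 0.562, "service": 0.384, "completion": 1.000},
    "greedy_patrol":   {"coverage": 0.496, "service": 0.381, "completion": 0.750},
}

WEIGHTS = {"coverage": 0.4, "service": 0.3, "completion": 0.3}  # assumed split

def mission_fit(m: dict) -> float:
    """Weighted sum of metrics already normalized to [0, 1]."""
    return sum(WEIGHTS[k] * m[k] for k in WEIGHTS)

# Rank policies by the combined score, best first.
ranked = sorted(metrics, key=lambda name: mission_fit(metrics[name]), reverse=True)
```

With any weighting that rewards task service and completion, `priority_patrol` stays on top, which matches the ranking in the table.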

## Key Takeaways

- Best overall policy: `priority_patrol` with mission-fit score `0.680`.
- Static basing maintains uninterrupted persistence over its fixed footprint, but it under-serves dynamic tasks once demand shifts.
- Random patrol (`patrol`) achieves the broadest coverage (`0.834`), but it is far less disciplined about task response than the planner-style baselines.
- `greedy_patrol` is the strongest non-priority baseline, explicitly assigning drones to targets at each step.
- `priority_patrol` lifts average task service to `0.384`: a large gain over random patrol (`0.086`) and a marginal edge over the greedy planner (`0.381`).
- The heterogeneous fleet matters: slower long-endurance sentinels anchor persistent regions while faster scouts absorb the time-varying task spikes.
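The per-step assignment behind the planner baselines can be sketched as follows. This is a minimal illustration, assuming each drone simply claims the nearest still-unclaimed task; the function name and the Euclidean distance metric are assumptions, not the project's actual implementation.

```python
import math

def greedy_assign(drones, tasks):
    """Map each drone index to the nearest still-unclaimed task index.

    drones, tasks: lists of (x, y) positions.
    """
    assignment = {}
    free = set(range(len(tasks)))
    for i, (dx, dy) in enumerate(drones):
        if not free:
            break  # more drones than tasks this step
        # Claim the closest unassigned task by straight-line distance.
        j = min(free, key=lambda t: math.hypot(tasks[t][0] - dx, tasks[t][1] - dy))
        assignment[i] = j
        free.remove(j)
    return assignment

# Drone 0 at (0, 0) claims the task at (1, 0); drone 1 takes (4, 4).
pairing = greedy_assign([(0, 0), (5, 5)], [(4, 4), (1, 0)])
```

A priority-aware variant would sort or weight the tasks before this loop, which is the essential difference between `greedy_patrol` and `priority_patrol`.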

## Why This Helps The Portfolio

- It moves the project beyond a static trade study into dynamic decision support.
- It demonstrates relevance to operations analysis, data science, and data engineering roles simultaneously.
- It creates an offline evaluation harness that could later compare optimization or learned routing policies.
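The harness idea above reduces to a simple loop: run every candidate policy through identical simulated episodes and collect comparable scores. A minimal sketch, assuming policies and the episode runner are plain callables (all names here are hypothetical):

```python
def evaluate(policies, run_episode, n_episodes=10):
    """Return {policy_name: mean score} over identical episode seeds.

    policies: dict mapping a name to a policy object/callable.
    run_episode: callable(policy, seed) -> scalar score.
    """
    results = {}
    for name, policy in policies.items():
        # Same seed range for every policy keeps the comparison fair.
        scores = [run_episode(policy, seed=s) for s in range(n_episodes)]
        results[name] = sum(scores) / len(scores)
    return results

# Toy usage with stub policies and a stub simulator.
stub_sim = lambda policy, seed: policy(seed)
summary = evaluate({"static": lambda s: 0.5, "patrol": lambda s: 0.8}, stub_sim, 4)
```

Swapping in an optimization-based or learned routing policy then only requires it to satisfy the same callable interface, which is what makes the harness reusable.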
