# Monitor Your Object Detection Model
Track performance, collect feedback, and continuously improve your model.
## Monitoring Dashboard

## Performance Alerts

### Set Up Alerts
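The dashboard surfaces aggregate stats such as average confidence and tail latency. The same aggregates are easy to compute locally from raw prediction records; a minimal sketch, where the record field names (`confidence`, `latency_ms`) are illustrative assumptions, not a documented schema:

```python
import math

# Raw per-prediction records (field names are illustrative assumptions)
predictions = [
    {"confidence": 0.91, "latency_ms": 120},
    {"confidence": 0.64, "latency_ms": 480},
    {"confidence": 0.88, "latency_ms": 95},
    {"confidence": 0.72, "latency_ms": 210},
]

# Mean confidence across all predictions
avg_confidence = sum(p["confidence"] for p in predictions) / len(predictions)

# p99 latency: the value below which 99% of requests fall (nearest-rank method)
latencies = sorted(p["latency_ms"] for p in predictions)
rank = max(0, math.ceil(0.99 * len(latencies)) - 1)
p99_latency = latencies[rank]

print(f"avg_confidence: {avg_confidence:.3f}")
print(f"p99_latency_ms: {p99_latency}")
```

These are the same two metrics the alert examples below key on (`avg_confidence`, `p99_latency_ms`).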
```python
# Create alert for low confidence detections
alert = client.create_alert(
    model_id=model.id,
    name="Low Confidence Alert",
    condition={
        "metric": "avg_confidence",
        "operator": "lt",
        "threshold": 0.7,
        "window": "1h"
    },
    notify=["email:team@company.com", "slack:#ml-alerts"]
)

# Alert for high latency
latency_alert = client.create_alert(
    model_id=model.id,
    name="High Latency Alert",
    condition={
        "metric": "p99_latency_ms",
        "operator": "gt",
        "threshold": 500,
        "window": "15m"
    }
)
```

## Collect Feedback
### Mark Predictions as Correct/Incorrect

### Feedback Types
| Type | Description |
|---|---|
| `correct` | Detection was accurate |
| `wrong_label` | Object found, but wrong class |
| `wrong_box` | Right class, but box position off |
| `false_positive` | Detection on background/wrong object |
| `missed_object` | Object not detected (false negative) |
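On the client side, it can help to validate feedback against the types above before sending it. A minimal, client-independent sketch; the record shape and field names are assumptions for illustration, not a documented API:

```python
# Allowed feedback types, taken from the table above.
FEEDBACK_TYPES = {
    "correct", "wrong_label", "wrong_box", "false_positive", "missed_object"
}

def make_feedback(prediction_id, feedback_type, note=""):
    """Build a feedback record, rejecting unknown types early.

    The record shape (prediction_id/type/note) is an illustrative assumption.
    """
    if feedback_type not in FEEDBACK_TYPES:
        raise ValueError(f"unknown feedback type: {feedback_type!r}")
    return {"prediction_id": prediction_id, "type": feedback_type, "note": note}

record = make_feedback("pred_123", "wrong_box", note="box clips the bumper")
print(record["type"])
```

Failing fast on an unknown type keeps typos out of your feedback data, where they would silently skew later analysis.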
## Data Engine
Automatically collect images that need review:
```python
# Configure data engine
data_engine = client.configure_data_engine(
    model_id=model.id,
    config={
        # Collect low-confidence predictions
        "low_confidence": {
            "enabled": True,
            "threshold": 0.6,
            "sample_rate": 0.3  # Collect 30%
        },
        # Collect edge cases
        "edge_cases": {
            "enabled": True,
            "conditions": [
                "no_detections",  # Images with zero detections
                "high_density",   # Many overlapping objects
                "unusual_size"    # Very large/small objects
            ]
        },
        # Sample successful predictions
        "random_sample": {
            "enabled": True,
            "rate": 0.05  # 5% of all predictions
        }
    },
    target_dataset_id=dataset.id
)
```

## Drift Detection
Monitor for distribution shifts:
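It helps to know what a drift score measures before wiring up the API call. Class-distribution drift is commonly quantified as a divergence between the reference and current class frequencies; a minimal, client-independent sketch using Jensen-Shannon divergence (an illustrative choice, not necessarily what `check_drift` computes):

```python
import math

def js_divergence(p_counts, q_counts):
    """Jensen-Shannon divergence between two class-count distributions.

    Returns 0.0 for identical distributions; the maximum is ln(2)
    (completely disjoint class support).
    """
    classes = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values())
    p = {c: p_counts.get(c, 0) / p_total for c in classes}
    q = {c: q_counts.get(c, 0) / q_total for c in classes}
    m = {c: (p[c] + q[c]) / 2 for c in classes}  # mixture distribution

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability classes
        return sum(a[c] * math.log(a[c] / b[c]) for c in classes if a[c] > 0)

    return (kl(p, m) + kl(q, m)) / 2

# Reference month vs. current month class counts (illustrative numbers)
reference = {"car": 900, "truck": 80, "bus": 20}
current = {"car": 600, "truck": 300, "bus": 100}

print(f"class drift: {js_divergence(reference, current):.3f}")
```

A score near 0 means the class mix is stable; a rising score means the objects your model sees in production are shifting away from what it was trained on.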
```python
# Check for drift
drift_report = client.check_drift(
    model_id=model.id,
    reference_period="2024-01-01:2024-01-31",
    current_period="2024-02-01:2024-02-28"
)

print(f"Class distribution drift: {drift_report['class_drift']:.3f}")
print(f"Confidence drift: {drift_report['confidence_drift']:.3f}")
print(f"Detection count drift: {drift_report['count_drift']:.3f}")

if drift_report['significant_drift']:
    print("WARNING: Significant drift detected!")
    print(f"  Most affected classes: {drift_report['affected_classes']}")
```

## Continuous Improvement Loop
```mermaid
graph LR
    A[Deploy Model] --> B[Monitor Predictions]
    B --> C[Collect Feedback]
    C --> D[Review & Label]
    D --> E[Retrain Model]
    E --> A
```

### Implement the Loop
```python
# 1. Get items flagged by the data engine
flagged_items = client.get_flagged_items(
    dataset_id=dataset.id,
    status="needs_review",
    limit=100
)

# 2. Review and label (or send to human reviewers)
for item in flagged_items:
    # Display for review...
    pass

# 3. Create a new dataset version
new_version = client.create_dataset_version(
    dataset_id=dataset.id,
    parent_version_id=version.id,
    name="v2 - with feedback"
)

# 4. Retrain
job = client.create_training_job(
    name="Vehicle Detection v2",
    dataset_id=dataset.id,
    version_id=new_version.id,
    base_model_id=model.id  # Fine-tune from the previous model
)
```

## A/B Testing
Compare model versions in production:
```python
# Create A/B test
ab_test = client.create_ab_test(
    name="Detection Model Comparison",
    model_a_id=model_v1.id,  # Current production
    model_b_id=model_v2.id,  # New challenger
    config={
        "traffic_split": 0.1,  # 10% of traffic to model B
        "metrics": ["confidence", "latency", "feedback_score"],
        "duration_days": 7
    }
)

# Check results
results = client.get_ab_test_results(ab_test.id)
print(f"Model A: {results['model_a']['avg_confidence']:.3f} confidence")
print(f"Model B: {results['model_b']['avg_confidence']:.3f} confidence")
print(f"Statistical significance: {results['p_value']:.4f}")
```

## Best Practices
- **Set baseline metrics** - Know what “normal” looks like
- **Alert on anomalies** - Not just fixed thresholds
- **Review edge cases** - Most learning comes from failures
- **Track class distribution** - Detect when new objects appear
- **Automate feedback collection** - Make it easy to flag issues
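The “alert on anomalies” point above can be sketched as a rolling-baseline check: instead of a fixed cutoff, compare the current value to its recent history. This is an illustrative technique, not a documented platform feature:

```python
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag `current` if it deviates more than `z_threshold` standard
    deviations from the recent history, rather than using a fixed cutoff."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# Hourly average-confidence readings: a stable baseline, then a sudden drop.
history = [0.82, 0.81, 0.83, 0.80, 0.82, 0.81, 0.83, 0.82]
print(is_anomalous(history, 0.81))  # within normal variation
print(is_anomalous(history, 0.55))  # anomalous drop
```

A fixed threshold of 0.7 would miss a model whose baseline is 0.95 degrading to 0.75; a baseline-relative check catches it.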
## Complete!
You’ve completed the object detection guide. Your model is now:
- Trained on your data
- Deployed to production
- Being monitored and improved