
Welcome readers, the CV class is back in session! We’ve already studied 30+ computer vision models in my previous blogs, each bringing its own unique strengths to the table, from the rapid detection skills of YOLO to the transformative power of Vision Transformers (ViTs). Today, we’re introducing a new student to our classroom: RF-DETR. Read on to learn everything about Roboflow’s RF-DETR and how it bridges the gap between speed and accuracy in object detection.
What is Roboflow’s RF-DETR?
RF-DETR is a real-time, transformer-based object detection model that achieves over 60 mAP on the COCO dataset, an impressive feat for a real-time model. Naturally, we’re curious: can RF-DETR match YOLO’s speed? Can it adapt to the diverse tasks we encounter in the real world?
That’s what we’re here to explore. In this article, we’ll break down RF-DETR’s core features, its real-time capabilities, strong domain adaptability, and open-source availability, and see how it performs alongside other models. Let’s dive in and see if this newcomer has what it takes to excel in real-world applications!
Why Is RF-DETR a Game Changer?
- Outstanding performance on both COCO and RF100-VL benchmarks.
- Designed to handle both novel domains and high-speed environments, making it perfect for edge and low-latency applications.
- Ranks in the top two across all categories when compared with real-time COCO SOTA transformer models (like D-FINE and LW-DETR) and SOTA YOLO CNN models (like YOLOv11 and YOLOv8).
Model Performance and New Benchmarks
Object detection models are increasingly challenged to prove their worth beyond just COCO – a dataset that, while historically critical, hasn’t been updated since 2017. As a result, many models show only marginal improvements on COCO and turn to other datasets (e.g., LVIS, Objects365) to demonstrate generalizability.
RF100-VL is Roboflow’s new benchmark, collecting around 100 diverse datasets (aerial imagery, industrial inspection, etc.) out of the 500,000+ on Roboflow Universe. This benchmark emphasizes domain adaptability, a critical factor for real-world use cases where data can look drastically different from COCO’s common objects.
Why Do We Need RF100-VL?
- Real World Diversity: RF100-VL includes datasets covering scenarios like lab imaging, industrial inspection, and aerial photography to test how well models perform outside traditional benchmarks.
- Standardized Evaluation: By standardizing the evaluation process, RF100-VL allows direct comparisons between different architectures, including transformer-based models and CNN-based YOLO variants.
- Adaptability Over Incremental Gains: With COCO saturating, domain adaptability becomes a top-tier consideration alongside latency and raw accuracy.
Roboflow’s benchmark table shows how RF-DETR stacks up against other real-time object detection models:
- COCO: RF-DETR’s base variant achieves 53.3 mAP, placing it on par with other real-time models.
- RF100-VL: RF-DETR outperforms other models (86.7 mAP), showing its exceptional domain adaptability.
- Speed: At 6.0 ms/img on a T4 GPU, RF-DETR matches or outperforms competing models when factoring in post-processing.
Note: As of now, the code and checkpoints for both RF-DETR-Base and RF-DETR-Large are available.
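Both variants can be loaded straight from the rfdetr Python package. A minimal sketch (class names follow the package’s public API; parameter counts are those reported by Roboflow):

from rfdetr import RFDETRBase, RFDETRLarge

base_model = RFDETRBase()    # ~29M parameters
large_model = RFDETRLarge()  # ~128M parameters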
Total Latency Also Matters
- NMS in YOLO: YOLO models use Non-Maximum Suppression (NMS) to refine bounding boxes. This step can slow down inference slightly, especially if there are many objects in the frame.
- No Extra Step in DETRs: RF-DETR follows the DETR family’s approach, avoiding the need for an extra NMS step for bounding box refinement.
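To make that post-processing overhead concrete, here is a minimal greedy NMS routine of the kind CNN detectors run after the forward pass. This is an illustrative sketch, not YOLO’s actual implementation; RF-DETR’s set-based predictions skip this step entirely:

import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box, then drop
    # any remaining box that overlaps it too much.
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of box i with every remaining box (xyxy format)
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_threshold]
    return keep

Because this loop runs on every frame and scales with the number of candidate boxes, crowded scenes make a YOLO pipeline’s end-to-end latency less predictable, which is exactly what a total-latency comparison accounts for.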
Latency vs. Accuracy on COCO
- Horizontal Axis (Latency): Measured in milliseconds (ms) per image on an NVIDIA T4 GPU using TensorRT10 FP16. Lower latency means faster inference here 🙂
- Vertical Axis (mAP @0.50:0.95): The mean Average Precision on the Microsoft COCO benchmark, a standard measure of detection accuracy. Higher mAP indicates better performance.
In this chart, RF-DETR demonstrates accuracy competitive with YOLO models while keeping latency in the same range. RF-DETR surpasses the 60 mAP threshold, making it the first documented real-time model to achieve this performance level on COCO.
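If you want to sanity-check latency on your own hardware, a rough timing loop like the sketch below works. Keep in mind that the published 6.0 ms/img figure comes from a TensorRT FP16 setup on a T4, so plain PyTorch numbers will differ; the image filename here is a placeholder:

import time
from PIL import Image
from rfdetr import RFDETRBase

model = RFDETRBase()
image = Image.open("sample.jpg")  # placeholder: any local test image

# Warm-up runs so one-time initialization doesn't skew the timing
for _ in range(5):
    model.predict(image, threshold=0.5)

n = 50
start = time.perf_counter()
for _ in range(n):
    model.predict(image, threshold=0.5)
elapsed = time.perf_counter() - start
print(f"{1000 * elapsed / n:.1f} ms/img end-to-end on this machine")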
Domain Adaptability on RF100-VL
Here, RF-DETR stands out by achieving the highest mAP on RF100-VL, indicating strong adaptability across varied domains. This suggests that RF-DETR is not only competitive on COCO but also excels at handling real-world datasets where domain-specific objects and conditions can differ significantly from COCO’s common objects.
Potential Ranking of RF-DETR
Based on the performance metrics from the Roboflow leaderboard, RF-DETR demonstrates competitive results in both accuracy and efficiency.
- RF-DETR-Large (128M params) would rank 1st, outperforming all existing models with an estimated mAP 50:95 above 60.5, making it the most accurate model on the leaderboard.
- RF-DETR-Base (29M params) would rank around 4th place, closely competing with models like DEIM-D-FINE-X (61.7M params, 0.548 mAP 50:95) and D-FINE-X (61.6M params, 0.541 mAP 50:95). Despite its lower parameter count, it maintains a strong accuracy advantage.
This ranking further highlights RF-DETR’s efficiency, delivering high performance with optimized latency while maintaining a smaller model size compared to some competitors.
RF-DETR Architecture Overview
Historically, CNN-based YOLO models have led the pack in real-time object detection. Yet, CNNs alone do not always benefit from large-scale pre-training, which is increasingly pivotal in machine learning.
Transformers excel with large-scale pre-training but have often been too heavy or slow for real-time applications. Recent work, however, shows that DETR-based models can match YOLO’s speed once the post-processing overhead YOLO requires is taken into account.
RF-DETR’s Hybrid Advantage
- Pre-trained DINOv2 Backbone: This helps the model transfer knowledge from large-scale image pre-training, boosting performance in novel or varied domains. Combining LW-DETR with a pre-trained DINOv2 backbone, RF-DETR offers exceptional domain adaptability and significant benefits from pre-training.
- Single-Scale Feature Extraction: While Deformable DETR leverages multi-scale attention, RF-DETR simplifies feature extraction to a single scale, striking a balance between speed and performance.
- Multi-Resolution Training: RF-DETR can be trained at multiple resolutions, enabling you to pick the best trade-off between speed and accuracy at inference without retraining the model.
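For instance, resolution is simply a constructor argument in the rfdetr package (per the package docs it should be divisible by 56; the exact values below are illustrative):

from rfdetr import RFDETRBase

# Lower resolution trades accuracy for speed; higher does the opposite.
fast_model = RFDETRBase(resolution=448)      # quicker inference
accurate_model = RFDETRBase(resolution=728)  # higher accuracy, more latency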
For more information, read the research paper.
How to Use RF-DETR?
Task 1: Using it for Object Detection in an Image
Install RF-DETR via pip (the leading ! is for notebook environments; drop it in a regular terminal):
!pip install rfdetr
You can then load a pre-trained checkpoint (trained on COCO) for immediate use in your application:
import io
import requests
import supervision as sv
from PIL import Image
from rfdetr import RFDETRBase

# Load the COCO-pretrained base model
model = RFDETRBase()

# Download a sample image
url = "https://media.roboflow.com/notebooks/examples/dog-2.jpeg"
image = Image.open(io.BytesIO(requests.get(url).content))

# Run inference, keeping detections with confidence >= 0.5
detections = model.predict(image, threshold=0.5)

# Draw boxes and labels with the supervision library
annotated_image = image.copy()
annotated_image = sv.BoxAnnotator().annotate(annotated_image, detections)
annotated_image = sv.LabelAnnotator().annotate(annotated_image, detections)
sv.plot_image(annotated_image)
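Without explicit labels, the label annotator falls back to class IDs. Recent versions of the package also ship a COCO class-name mapping you can pass in for human-readable labels (a small sketch continuing the snippet above):

from rfdetr.util.coco_classes import COCO_CLASSES

labels = [
    f"{COCO_CLASSES[class_id]} {confidence:.2f}"
    for class_id, confidence
    in zip(detections.class_id, detections.confidence)
]
annotated_image = sv.LabelAnnotator().annotate(annotated_image, detections, labels)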
Task 2: Using it for Object Detection in a Video
I’ll share my GitHub repository link so you can try out the model yourself 🙂. Just follow the README.md instructions to run the code.
Code:
import cv2
import json
from rfdetr import RFDETRBase

# Load the model
model = RFDETRBase()

# Read the classes.json file and store class names in a dictionary
with open('classes.json', 'r', encoding='utf-8') as file:
    class_names = json.load(file)

# Open the video file
cap = cv2.VideoCapture('walking.mp4')  # https://www.pexels.com/video/video-of-people-walking-855564/

# Create the output video ('mp4v' matches the .mp4 container)
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output.mp4', fourcc, 20.0, (960, 540))

# For live video streaming:
# cap = cv2.VideoCapture(0)  # 0 refers to the default camera

while True:
    # Read a frame
    ret, frame = cap.read()
    if not ret:
        break  # Exit the loop when the video ends

    # Perform object detection (OpenCV frames are BGR; the model expects RGB)
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = model.predict(rgb_frame, threshold=0.5)

    # Mark the detected objects
    for i, box in enumerate(detections.xyxy):
        x1, y1, x2, y2 = map(int, box)
        class_id = int(detections.class_id[i])

        # Get the class name using class_id
        label = class_names.get(str(class_id), "Unknown")
        confidence = detections.confidence[i]

        # Draw the bounding box (white and thick)
        color = (255, 255, 255)
        thickness = 7
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, thickness)

        # Display the label and confidence score above the box
        text = f"{label} ({confidence:.2f})"
        font = cv2.FONT_HERSHEY_SIMPLEX
        font_scale = 2
        font_thickness = 7
        cv2.putText(frame, text, (x1, y1 - 10), font, font_scale, (0, 0, 255), font_thickness, cv2.LINE_AA)

    # Display the results
    resized_frame = cv2.resize(frame, (960, 540))
    cv2.imshow('Labeled Video', resized_frame)

    # Save the output
    out.write(resized_frame)

    # Exit when 'q' key is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release resources
cap.release()
out.release()
cv2.destroyAllWindows()
Output:
Fine-Tuning for Custom Datasets
Fine-tuning is where RF-DETR really shines, especially if you’re working with niche or smaller datasets:
- Use COCO Format: Organize your dataset into train/, valid/, and test/ directories, each with its own _annotations.coco.json.
- Leverage Colab: The Roboflow team provides a detailed Colab notebook that walks you through training on your own dataset.
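For reference, the COCO-format layout described above looks like this on disk (the _annotations.coco.json naming matches Roboflow exports):

dataset/
├── train/
│   ├── _annotations.coco.json
│   └── ... (images)
├── valid/
│   ├── _annotations.coco.json
│   └── ... (images)
└── test/
    ├── _annotations.coco.json
    └── ... (images)

With the dataset in place, a minimal training call looks like this: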
from rfdetr import RFDETRBase

model = RFDETRBase()

model.train(
    dataset_dir="",  # path to your COCO-formatted dataset
    epochs=10,
    batch_size=4,
    grad_accum_steps=4,  # effective batch size = batch_size * grad_accum_steps
    lr=1e-4
)
During training, RF-DETR will produce:
- Regular Weights: Standard model checkpoints.
- EMA Weights: An Exponential Moving Average version of the model, often yielding more stable performance.
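To run inference with your fine-tuned model, point the same class at the saved weights. The pretrain_weights argument is part of the package API, but the checkpoint filename below is only an example of what a training run might write:

from PIL import Image
from rfdetr import RFDETRBase

# Load fine-tuned weights (filename illustrative; the EMA checkpoint is
# often the more stable choice, as noted above)
model = RFDETRBase(pretrain_weights="output/checkpoint_best_ema.pth")

image = Image.open("sample.jpg")  # placeholder test image
detections = model.predict(image, threshold=0.5)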
How to Train RF-DETR on a Custom Dataset?
As an example, the Roboflow team used a mahjong tile recognition dataset, part of the RF100-VL benchmark, containing over 2,000 images. Their guide demonstrates how to download the dataset, install the necessary tools, and fine-tune the model on your custom data.
Refer to the Roboflow blog to learn more.
The resulting display should show the ground truth on one side and the model’s detections on the other. In our example, RF-DETR correctly identifies most mahjong tiles, with only minor misdetections that can be improved with further training.
Important Note:
- Instance Segmentation: RF-DETR currently does not support instance segmentation, as noted by Roboflow’s Open Source Lead, Piotr Skalski.
- Pose Estimation: Pose estimation support is also on the horizon.
Final Verdict & Potential Edge Over Other CV Models
RF-DETR is one of the best real-time DETR-based models, offering a strong balance between accuracy, speed, and domain adaptability. If you need a real-time, transformer-based detector that avoids post-processing overhead and generalizes beyond COCO, this is a top contender. However, YOLOv8 still holds an edge in raw speed for some applications.
Where RF-DETR Could Outperform Other CV Models:
- Specialized Domains & Custom Datasets: RF-DETR excels in domain adaptation (86.7 mAP on RF100-VL), making it ideal for medical imaging, industrial defect detection, and autonomous navigation where COCO-trained models struggle.
- Low-Latency Applications: Since it doesn’t require NMS, it can be faster than YOLO in scenarios where post-processing adds overhead, such as drone-based detection, video analytics, or robotics.
- Transformer-Based Future-Proofing: Unlike CNN-based detectors (YOLO, Faster R-CNN), RF-DETR benefits from self-attention and large-scale pretraining (DINOv2 backbone), making it better suited for multi-object reasoning, occlusion handling, and generalization to unseen environments.
- Edge AI & Embedded Devices: RF-DETR’s 6.0 ms/img inference time on a T4 GPU suggests it could be a strong candidate for real-time edge deployment where traditional DETR models are too slow.
A round of applause to the Roboflow ML team – Peter Robicheaux, James Gallagher, Joseph Nelson, Isaac Robinson.
Reference: Peter Robicheaux, James Gallagher, Joseph Nelson, and Isaac Robinson. (Mar 20, 2025). RF-DETR: A SOTA Real-Time Object Detection Model. Roboflow Blog: https://blog.roboflow.com/rf-detr/
Conclusion
Roboflow’s RF-DETR represents a new generation of real-time object detection, balancing high accuracy, domain adaptability, and low latency in a single model. Whether you’re building a cutting-edge robotics system or deploying on resource-limited edge devices, RF-DETR offers a versatile and future-proof solution.
What are your thoughts? Let me know in the comment section.