We provide the REalistic and Dynamic Scenes (REDS) dataset for video deblurring and super-resolution. The train and validation subsets are publicly available. The dataset can be downloaded by running the Python code or by clicking the links below. Downloads are available via Google Drive and the SNU CVLab server.
Type | Train | Validation | Test |
---|---|---|---|
Sharp | train_sharp | val_sharp | test_sharp |
Blur | train_blur | val_blur | test_blur |
Blur + MPEG | train_blur_comp | val_blur_comp | test_blur_comp |
Low Resolution | train_sharp_bicubic | val_sharp_bicubic | test_sharp_bicubic |
Blur + Low Resolution | train_blur_bicubic | val_blur_bicubic | test_blur_bicubic |
Blur + JPEG | train_blur_jpeg | val_blur_jpeg | test_blur_jpeg |
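The archive names in the table above follow a simple `{split}_{type}` pattern. The helper below is a hypothetical sketch (not the official download script) that illustrates how a split and degradation type map to an archive name:

```python
# Hypothetical helper illustrating the naming scheme in the table above.
# The official REDS download script may differ.
TYPES = ['sharp', 'blur', 'blur_comp', 'sharp_bicubic', 'blur_bicubic', 'blur_jpeg']
SPLITS = ['train', 'val', 'test']

def archive_name(split: str, data_type: str) -> str:
    """Return the archive name for a given split and degradation type."""
    if split not in SPLITS or data_type not in TYPES:
        raise ValueError(f'unknown split/type: {split}/{data_type}')
    return f'{split}_{data_type}'

print(archive_name('train', 'blur'))  # train_blur
```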
The REDS dataset is generated from 120 fps videos, synthesizing blurry frames by merging subsequent frames. The original frames used to generate the blurry images are available below for the training and validation data. Due to the large file sizes, the dataset is divided into multiple zip files. Each zip file contains 15 sequences of 500 frames, equivalent in duration to the standard 24 fps version above. Each file is around 10 GB.
Type | Google Drive | SNU CVLab Server |
---|---|---|
train_orig_part0.zip | link | link |
train_orig_part1.zip | link | link |
train_orig_part2.zip | link | link |
train_orig_part3.zip | link | link |
train_orig_part4.zip | link | link |
train_orig_part5.zip | link | link |
train_orig_part6.zip | link | link |
train_orig_part7.zip | link | link |
train_orig_part8.zip | link | link |
train_orig_part9.zip | link | link |
train_orig_part10.zip | link | link |
train_orig_part11.zip | link | link |
train_orig_part12.zip | link | link |
train_orig_part13.zip | link | link |
train_orig_part14.zip | link | link |
train_orig_part15.zip | link | link |
val_orig_part0.zip | link | link |
val_orig_part1.zip | link | link |
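The blur synthesis described above, merging subsequent 120 fps frames into one blurry frame, can be sketched as follows. This is a simplified illustration: it assumes a fixed gamma curve as the camera response, whereas the actual pipeline interpolates frames to a higher virtual frame rate and uses the calibrated inverse CRF described below.

```python
import numpy as np

def synthesize_blur(frames, gamma=2.2):
    """Merge consecutive high-frame-rate frames into one blurry frame.

    Simplified sketch: a fixed gamma curve stands in for the calibrated
    inverse CRF, and no frame interpolation is performed.
    """
    frames = np.asarray(frames, dtype=np.float64) / 255.0   # (K, H, W, C)
    linear = frames ** gamma                                # approx. inverse CRF
    merged = linear.mean(axis=0)                            # temporal average
    return np.clip(np.rint(merged ** (1.0 / gamma) * 255.0), 0, 255).astype(np.uint8)

# e.g., 5 consecutive 120 fps frames -> one blurry 24 fps frame
clip = np.random.randint(0, 256, size=(5, 32, 32, 3))
blurry = synthesize_blur(clip)
print(blurry.shape)  # (32, 32, 3)
```

Averaging in (approximately) linear space, rather than on gamma-encoded pixel values, is what makes the synthesized blur physically plausible.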
Inverse CRF file:
The inverse camera response function is obtained from OpenCV's cv2.createCalibrateRobertson() function. Following the OpenCV convention, the color channels are in BGR order.
crf_inv[:, 0, 0] # B
crf_inv[:, 0, 1] # G
crf_inv[:, 0, 2] # R
import torch

# device, dtype, C, H, W, and buffer_tensor are defined elsewhere
crf_path = 'crf.pt'
crf_inv = torch.from_numpy(torch.load(crf_path)).to(device, dtype).squeeze_()  # 256 x 3
# 1st-order approximation at RGB=250 to regularize extreme responses at RGB>250
diff = (crf_inv[251] - crf_inv[249]) / 2
for i in range(251, 256):
    crf_inv[i] = crf_inv[i - 1] + diff
...
# frame interpolation, etc.
# quantize to integer indices in [0, 255], then look up the inverse CRF per pixel
buffer_tensor = buffer_tensor.permute(0, 2, 3, 1).reshape(-1, C).mul_(255).add_(0.5).clamp_(0, 255).long()  # name reused in place to save GPU memory
buffer_tensor = torch.gather(crf_inv, 0, buffer_tensor).reshape(-1, H, W, C).permute(0, 3, 1, 2)
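Since crf.pt is not reproduced here, the quantize-and-gather lookup above can be demonstrated self-contained with a synthetic stand-in table (a gamma curve in place of the Robertson-calibrated response):

```python
import torch

# Synthetic stand-in for the crf.pt table: a 256 x 3 inverse response built
# from a gamma curve. The real file stores the Robertson-calibrated table.
crf_inv = (torch.linspace(0, 1, 256) ** 2.2).unsqueeze(1).repeat(1, 3)  # 256 x 3

N, C, H, W = 2, 3, 4, 4
buffer = torch.rand(N, C, H, W)  # frames in [0, 1], NCHW layout

# Quantize to integer indices, then look up the inverse CRF per pixel,
# exactly as in the fragment above.
idx = buffer.permute(0, 2, 3, 1).reshape(-1, C).mul(255).add(0.5).clamp(0, 255).long()
linear = torch.gather(crf_inv, 0, idx).reshape(N, H, W, C).permute(0, 3, 1, 2)
print(linear.shape)  # torch.Size([2, 3, 4, 4])
```

torch.gather along dim 0 indexes each channel column of the 256 x 3 table independently, which is why the per-pixel indices are first flattened to shape (N*H*W, 3).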
Until the official submission site is published, we accept email submissions at seungjun.nah@gmail.com. Results will be evaluated manually, returned by email, and posted on the leaderboard later.
Due to the Google Drive traffic limit, downloads are now also available from the SNU CVLab server.
We are planning to host an open-ended public leaderboard site for the REDS-related challenges. We will provide PSNR and SSIM evaluations for submitted test results. Stay tuned for updates!
The REDS dataset was used in the NTIRE 2019, 2020, and 2021 challenges. If you find our dataset useful for your research, please consider citing our work:
@InProceedings{Nah_2019_CVPR_Workshops_REDS,
author = {Nah, Seungjun and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Timofte, Radu and Lee, Kyoung Mu},
title = {NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study},
booktitle = {CVPR Workshops},
month = {June},
year = {2019}
}
@InProceedings{Nah_2019_CVPR_Workshops_Deblur,
author = {Nah, Seungjun and Timofte, Radu and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Lee, Kyoung Mu},
title = {NTIRE 2019 Challenge on Video Deblurring: Methods and Results},
booktitle = {CVPR Workshops},
month = {June},
year = {2019}
}
@InProceedings{Nah_2019_CVPR_Workshops_SR,
author = {Nah, Seungjun and Timofte, Radu and Gu, Shuhang and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Lee, Kyoung Mu},
title = {NTIRE 2019 Challenge on Video Super-Resolution: Methods and Results},
booktitle = {CVPR Workshops},
month = {June},
year = {2019}
}
@InProceedings{Nah_2020_CVPR_Workshops_Deblur,
author = {Nah, Seungjun and Son, Sanghyun and Timofte, Radu and Lee, Kyoung Mu},
title = {NTIRE 2020 Challenge on Image and Video Deblurring},
booktitle = {CVPR Workshops},
month = {June},
year = {2020}
}
@InProceedings{Nah_2021_CVPR_Workshops,
author = {Nah, Seungjun and Son, Sanghyun and Lee, Suyoung and Timofte, Radu and Lee, Kyoung Mu},
title = {NTIRE 2021 Challenge on Image Deblurring},
booktitle = {CVPR Workshops},
month = {June},
year = {2021},
pages = {149-165}
}
@InProceedings{Son_2021_CVPR_Workshops,
author = {Son, Sanghyun and Lee, Suyoung and Nah, Seungjun and Timofte, Radu and Lee, Kyoung Mu},
title = {NTIRE 2021 Challenge on Video Super-Resolution},
booktitle = {CVPR Workshops},
month = {June},
year = {2021},
pages = {166-181}
}
The REDS dataset is released under the CC BY 4.0 license.