Seungjun Nah


Senior Research Scientist
NVIDIA

GitHub

Google Scholar



REDS dataset


We provide the REalistic and Dynamic Scenes (REDS) dataset for video deblurring and super-resolution. The train and validation subsets are publicly available. The dataset can be downloaded by running the Python script (download_REDS.py) or by clicking the links below. Downloads are hosted on Google Drive and on the SNU CVLab server; a minimal download sketch follows the tables below.

download_REDS.py

Google Drive

Type                    Train                 Validation          Test
Sharp                   train_sharp           val_sharp           test_sharp
Blur                    train_blur            val_blur            test_blur
Blur + MPEG             train_blur_comp       val_blur_comp       test_blur_comp
Low Resolution          train_sharp_bicubic   val_sharp_bicubic   test_sharp_bicubic
Blur + Low Resolution   train_blur_bicubic    val_blur_bicubic    test_blur_bicubic
Blur + JPEG             train_blur_jpeg       val_blur_jpeg       test_blur_jpeg

SNU CVLab Server

Type                    Train                 Validation          Test
Sharp                   train_sharp           val_sharp           test_sharp
Blur                    train_blur            val_blur            test_blur
Blur + MPEG             train_blur_comp       val_blur_comp       test_blur_comp
Low Resolution          train_sharp_bicubic   val_sharp_bicubic   test_sharp_bicubic
Blur + Low Resolution   train_blur_bicubic    val_blur_bicubic    test_blur_bicubic
Blur + JPEG             train_blur_jpeg       val_blur_jpeg       test_blur_jpeg
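
For scripted downloads, a minimal sketch along the lines below may help. The URL and archive name are placeholders, not real links; the actual links are the ones behind the tables above or inside download_REDS.py. Note that large Google Drive files usually require a confirmation step, so a dedicated downloader or the SNU CVLab server links may be more convenient for automated use.

import urllib.request
import zipfile

# Placeholder URL: substitute one of the actual links from the tables above.
url = 'https://example.com/REDS/val_sharp.zip'
archive = 'val_sharp.zip'

urllib.request.urlretrieve(url, archive)   # download the archive
with zipfile.ZipFile(archive) as z:
    z.extractall('REDS')                   # extract into a local REDS/ directory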

REDS 120fps

The REDS dataset is generated from 120 fps videos; blurry frames are synthesized by merging consecutive frames. The original high-frame-rate frames used to generate the blurry images are available below for the training and validation data. Due to the large file sizes, the data is split into multiple zip files. Each zip file contains 15 sequences of 500 frames each, which spans the same time duration as the standard 24 fps version above. Each file is around 10 GB. A simplified merging sketch is shown after the table below.

File                    Google Drive    SNU CVLab Server
train_orig_part0.zip    link            link
train_orig_part1.zip    link            link
train_orig_part2.zip    link            link
train_orig_part3.zip    link            link
train_orig_part4.zip    link            link
train_orig_part5.zip    link            link
train_orig_part6.zip    link            link
train_orig_part7.zip    link            link
train_orig_part8.zip    link            link
train_orig_part9.zip    link            link
train_orig_part10.zip   link            link
train_orig_part11.zip   link            link
train_orig_part12.zip   link            link
train_orig_part13.zip   link            link
train_orig_part14.zip   link            link
train_orig_part15.zip   link            link
val_orig_part0.zip      link            link
val_orig_part1.zip      link            link
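
As a rough illustration of the merging described above, the sketch below averages a window of consecutive 120 fps frames in pixel space to approximate one blurry frame. This is a deliberate simplification: the released blurry frames are synthesized with additional frame interpolation and signal-space averaging through the camera response function covered in the next section. The directory layout and 8-digit file names used here are assumptions for illustration.

import os
import numpy as np
from PIL import Image

def naive_blur(seq_dir, start, n=5):
    """Average n consecutive 120 fps frames starting at frame index `start`.
    Assumes frames are named 00000000.png, 00000001.png, ... inside seq_dir."""
    frames = [np.asarray(Image.open(os.path.join(seq_dir, f'{i:08d}.png')), dtype=np.float64)
              for i in range(start, start + n)]
    blurry = np.mean(frames, axis=0)                     # simple pixel-space average
    return blurry.round().clip(0, 255).astype(np.uint8)

# e.g., approximate the first blurry frame of sequence 000 (path is an assumption)
# blurry0 = naive_blur('train_orig/000', start=0, n=5)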

Camera response function

Inverse CRF file: crf.pt

The inverse camera response function is obtained with OpenCV's cv2.createCalibrateRobertson(). Following the OpenCV convention, the color channels are in BGR order.

crf_inv[:, 0, 0]  # B
crf_inv[:, 0, 1]  # G
crf_inv[:, 0, 2]  # R
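
For context, this is roughly how such a response curve is estimated with OpenCV's Robertson calibration. The image list and exposure times below are placeholders for illustration; the returned response has shape 256 x 1 x 3, matching the indexing above.

import cv2
import numpy as np

# Placeholder inputs: differently exposed photos of the same static scene
# and their exposure times in seconds (both are assumptions).
images = [cv2.imread(f'exposure_{i}.png') for i in range(4)]   # uint8, BGR
times = np.array([1/30, 1/60, 1/125, 1/250], dtype=np.float32)

calibrate = cv2.createCalibrateRobertson()
response = calibrate.process(images, times)    # 256 x 1 x 3, float32, BGR order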


import torch

# Load the inverse CRF (saved as a 256 x 1 x 3 numpy array) and drop the singleton dimension.
crf_path = 'crf.pt'
device, dtype = torch.device('cuda'), torch.float32  # example setting; adjust as needed
crf_inv = torch.from_numpy(torch.load(crf_path)).to(device, dtype).squeeze_()  # 256 x 3

# 1st-order approximation at RGB=250 to regularize extreme responses at RGB>250
diff = (crf_inv[251] - crf_inv[249]) / 2
for i in range(251, 256):
    crf_inv[i] = crf_inv[i - 1] + diff

...
# frame interpolation, etc.

# buffer_tensor holds N x C x H x W frames in [0, 1]; quantize to 8-bit codes and
# look up the inverse CRF per channel (the variable is reused in place to save GPU memory).
_, C, H, W = buffer_tensor.shape
buffer_tensor = buffer_tensor.permute(0, 2, 3, 1).reshape(-1, C).mul_(255).add_(0.5).clamp_(0, 255).long()
buffer_tensor = torch.gather(crf_inv, 0, buffer_tensor).reshape(-1, H, W, C).permute(0, 3, 1, 2)
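
After averaging in signal space, the result eventually has to be mapped back to 8-bit pixel values. That step is not shown in the snippet above; one approximate way, assuming each channel of crf_inv is monotonically increasing, is a per-channel insertion-point lookup with torch.searchsorted (exact rounding would need a nearest-value search). The helper below is hypothetical, not part of the original pipeline.

import torch

def signal_to_pixel(signal, crf_inv):
    # Hypothetical helper, not part of the original code.
    # signal:  N x C x H x W tensor of signal-space values (C == 3)
    # crf_inv: 256 x 3 tensor, assumed monotonically increasing per channel
    N, C, H, W = signal.shape
    flat = signal.permute(1, 0, 2, 3).reshape(C, -1).to(crf_inv.dtype)   # C x (N*H*W)
    codes = torch.searchsorted(crf_inv.t().contiguous(), flat)           # insertion indices in [0, 256]
    codes = codes.clamp_(0, 255).reshape(C, N, H, W).permute(1, 0, 2, 3)
    return codes.to(signal.dtype).div_(255)                              # back to [0, 1] image values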

Updates

Reference

The REDS dataset was used in the NTIRE 2019, 2020, and 2021 challenges. If you find our dataset useful for your research, please consider citing our work:

@InProceedings{Nah_2019_CVPR_Workshops_REDS,
  author = {Nah, Seungjun and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Timofte, Radu and Lee, Kyoung Mu},
  title = {NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study},
  booktitle = {CVPR Workshops},
  month = {June},
  year = {2019}
}
@InProceedings{Nah_2019_CVPR_Workshops_Deblur,
  author = {Nah, Seungjun and Timofte, Radu and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Lee, Kyoung Mu},
  title = {NTIRE 2019 Challenge on Video Deblurring: Methods and Results},
  booktitle = {CVPR Workshops},
  month = {June},
  year = {2019}
}
@InProceedings{Nah_2019_CVPR_Workshops_SR,
  author = {Nah, Seungjun and Timofte, Radu and Gu, Shuhang and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Lee, Kyoung Mu},
  title = {NTIRE 2019 Challenge on Video Super-Resolution: Methods and Results},
  booktitle = {CVPR Workshops},
  month = {June},
  year = {2019}
}
@InProceedings{Nah_2020_CVPR_Workshops_Deblur,
  author = {Nah, Seungjun and Son, Sanghyun and Timofte, Radu and Lee, Kyoung Mu},
  title = {NTIRE 2020 Challenge on Image and Video Deblurring},
  booktitle = {CVPR Workshops},
  month = {June},
  year = {2020}
}
@InProceedings{Nah_2021_CVPR_Workshops,
  author = {Nah, Seungjun and Son, Sanghyun and Lee, Suyoung and Timofte, Radu and Lee, Kyoung Mu},
  title = {NTIRE 2021 Challenge on Image Deblurring},
  booktitle = {CVPR Workshops},
  month = {June},
  year = {2021},
  pages = {149-165}
}
@InProceedings{Son_2021_CVPR_Workshops,
  author = {Son, Sanghyun and Lee, Suyoung and Nah, Seungjun and Timofte, Radu and Lee, Kyoung Mu},
  title = {NTIRE 2021 Challenge on Video Super-Resolution},
  booktitle = {CVPR Workshops},
  month = {June},
  year = {2021},
  pages = {166-181}
}

LICENSE

The REDS dataset is released under the CC BY 4.0 license.