Datasets

Links to free data sets for computer vision applications. If you would like to submit a link, please contact us.

VENTURI Mountain Dataset

The VENTURI Mountain Dataset is a collection of 12 outdoor sequences captured with a smartphone, manually verified and annotated with ground-truth data.

UNICT-FD889

The UNICT-FD889 dataset is a food dataset composed of 889 distinct plates of food.

Times Square Intersection (TISI) Dataset

The Times Square Intersection (TISI) dataset was collected from a publicly accessible webcam for high-level, event-based video synopsis research.

Educational Resource Centre (ERCe) Dataset

The Educational Resource Centre (ERCe) dataset was collected from a publicly accessible webcam deployed on a university campus over about two months for semantic, event-based video synopsis research.

CUHK Crowd Dataset

The dataset contains 474 video clips from 215 crowded scenes.

LFW: Labeled Faces in the Wild

Labeled Faces in the Wild is a data set of face photographs designed for studying the problem of unconstrained face recognition.

California-ND

Managing photo collections involves a variety of image quality assessment tasks, e.g. selecting the "best" photos. Detecting near-duplicates is a prerequisite for automating these tasks, and California-ND is a benchmark dataset for near-duplicate detection.

Extreme View Dataset

This dataset is a two-view matching evaluation dataset with extreme viewpoint changes.

UvA Person Tracking Benchmarks

Various benchmarks for 3D single- and multiple-person tracking and pose recovery from overlapping monocular cameras, both indoors and outdoors.

Daimler Pedestrian Benchmarks

Various benchmarks for pedestrian detection, classification, segmentation, and path prediction. Pedestrian data as observed from on board a vehicle in traffic; mono, stereo, and multi-cue.

Caltech Pedestrian Detection Benchmark

The Caltech Pedestrian Dataset consists of approximately 10 hours of 640x480, 30 Hz video taken from a vehicle driving through regular traffic in an urban environment.

Berkeley Multimodal Human Action Database (MHAD)

The Berkeley Multimodal Human Action Database (MHAD) contains 11 actions performed by 7 male and 5 female subjects, all in the 23-30 age range except for one elderly subject.
