Yalin Bastanlar - Datasets


1) Outdoor panoramic image dataset for semantic segmentation

The dataset contains 600 outdoor panoramic images of the city of Pittsburgh with pixel-level semantic annotations (20 semantic classes, grouped into 7 categories). It can be downloaded via the project's GitHub repo.

Related publication: Orhan, S. and Bastanlar, Y., Semantic Segmentation of Outdoor Panoramic Images, Signal, Image and Video Processing, 2021. DOI:10.1007/s11760-021-02003-3


2) Camera-trap dataset for animal detection or animal/non-animal image classification

* The dataset consists of 4005 images and can be downloaded as zip files (randomly split into 8 files, ~300 MB each) using the link given here.
* The dataset belongs to and is shared with the permission of the Republic of Turkey, Ministry of Forest and Water Affairs. It contains animals of various species and sizes. 2585 images contain animals (some contain multiple animals) and 1420 images contain no animal. Image size varies from 1024x1280 to 2448x3264. About half of the images are captured at night, while the other half are captured during the day.
* Image annotations are in the form of PASCAL VOC annotation (.xml) files. More information on the PASCAL VOC Challenge is available here. Annotations mark animals only, without species labels. In other words, if no object tag exists in the .xml file, that image contains no animal.
* Images are grouped according to their associated camera-traps. The file name format is {DatasetAbbreviation}_{Field}_{Camera}_{Image Number}. For example, OB_A1_K1_0183.JPG denotes the 183rd image from the first camera (K1) of the A1 field. OB is fixed for this dataset.
* This is the annotated part of the dataset. The original dataset (all images from all cameras) is also available upon request.
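
The animal-presence convention and the file-name convention above can be sketched with Python's standard library. This is a minimal illustration, not part of the dataset's tooling; the XML layout assumed is the usual PASCAL VOC structure with top-level <object> tags:

```python
import re
import xml.etree.ElementTree as ET

def contains_animal(voc_xml: str) -> bool:
    """Return True if the PASCAL VOC annotation lists at least one object.

    In this dataset an <object> tag means an animal; an annotation file
    with no <object> tags means the image contains no animal.
    """
    root = ET.fromstring(voc_xml)
    return root.find("object") is not None

def parse_filename(name: str):
    """Split a name like OB_A1_K1_0183.JPG into (dataset, field, camera, number)."""
    m = re.match(r"(?P<ds>[^_]+)_(?P<field>[^_]+)_(?P<cam>[^_]+)_(?P<num>\d+)\.jpg$",
                 name, re.IGNORECASE)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    return m.group("ds"), m.group("field"), m.group("cam"), int(m.group("num"))
```

For example, `parse_filename("OB_A1_K1_0183.JPG")` yields `("OB", "A1", "K1", 183)`.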

Related publication: Tekeli, U. and Bastanlar, Y., Elimination of Useless Images from Raw Camera-Trap Data, Turkish Journal of Electrical Engineering and Computer Sciences, 2019, Volume 27, pp. 2395-2411. DOI:10.3906/elk-1808-130


3) Fisheye video dataset for vehicle classification

The dataset consists of more than 100 .avi files and can be downloaded as rar files (~200 MB each) using the link given here.
The videos contain cars, vans, motorcycles, or pedestrians. Some videos contain more than one object (some with occlusions); these are noted in the provided spreadsheet.

Related publication: Baris, I. and Bastanlar, Y., Classification and Tracking of Traffic Scene Objects with Hybrid Camera Systems, IEEE Intelligent Transportation Systems Conference (ITSC 2017), 16-19 October 2017, Yokohama, Japan. Copyright: IEEE. https://ieeexplore.ieee.org/document/8317588/


4) Catadioptric camera video dataset for vehicle classification

The dataset can be downloaded via three different zip files using the links given below.
Set 1: contains 124 videos for cars (in 5 parts, ~270 MB each).
Set 2: contains 104 videos for vans (minibuses) (in 4 parts, ~290 MB each).
Set 3: contains 49 videos for motorcycles (in 2 parts, ~270 MB each).

For each video, the following data is included in the zip files: i) the video itself (AVI format), ii) the foreground mask of each frame, obtained with background subtraction, iii) the foreground mask of each frame that contains the vehicle, iv) the annotated area covered by the vehicle when it is closest to the camera (to be used as ground truth).
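
The foreground masks lend themselves to silhouette averaging of the kind named in the related publication below. The following is a much-simplified sketch of that idea on plain Python binary masks, with an assumed 0.5 threshold; it is not the authors' implementation:

```python
def average_silhouette(masks, threshold=0.5):
    """Average a sequence of binary foreground masks (lists of rows of 0/1)
    over time and threshold the result to a single binary silhouette.

    A simplified sketch of a temporal average of silhouettes; the 0.5
    threshold is an assumption, not a value from the publication.
    """
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    # Per-pixel mean over all frames.
    avg = [[sum(m[r][c] for m in masks) / n for c in range(cols)]
           for r in range(rows)]
    # Keep pixels that are foreground in at least `threshold` of the frames.
    return [[1 if v >= threshold else 0 for v in row] for row in avg]
```

Averaging suppresses pixels that flicker in only a few frames while keeping the stable vehicle silhouette.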

Related publication: Karaimer, H.C. and Bastanlar, Y., Detection and Classification of Vehicles from Omnidirectional Videos using Temporal Average of Silhouettes, Int. Conference on Computer Vision Theory and Applications (2015).


5) Omnidirectional and panoramic image dataset (with annotations) to be used for human and car detection

Dataset 1 (37 MB) contains 30 omnidirectional images for detecting (standing) humans (66 annotated instances) and 50 omnidirectional images for detecting (side-view) cars (65 annotated instances).
The dataset also contains panoramic images converted from the omnidirectional ones. Annotations are provided in three sets: i) rectangular bounding boxes that slide and rotate around the image center, for omnidirectional images, ii) the proposed (doughnut-slice) annotations, for omnidirectional images, iii) standard (upright) bounding-box annotations, for panoramic images.
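
The conversion from an omnidirectional image to a panoramic one can be illustrated with a simple polar unwrapping: the angle around the image center becomes the panorama column, and the radius becomes the row. This is only a sketch; the center and radius parameters below are assumptions, not values from the dataset:

```python
import math

def omni_to_panorama(x, y, cx, cy, r_min, r_max, pano_w, pano_h):
    """Map a pixel (x, y) of a catadioptric omnidirectional image to the
    corresponding pixel (u, v) of an unwrapped cylindrical panorama.

    (cx, cy) is the image center, [r_min, r_max] the usable mirror ring,
    and pano_w x pano_h the panorama size -- all assumed parameters.
    """
    theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
    r = math.hypot(x - cx, y - cy)
    u = int(theta / (2 * math.pi) * pano_w) % pano_w          # column from angle
    v = int((r - r_min) / (r_max - r_min) * (pano_h - 1))     # row from radius
    return u, v
```

Iterating this mapping over the panorama grid (with the inverse mapping and interpolation) produces the unwrapped image; upright boxes in the panorama then correspond to the doughnut-slice regions in the omnidirectional image.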

Dataset 2 (5 MB) contains synthetic catadioptric omnidirectional images, formed by projecting perspective images to a 'defined' omnidirectional camera. One set projects 210 perspective images from the INRIA person dataset; the other projects 466 car side-views from the UIUC and Darmstadt perspective image datasets.
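
One common way to define such a catadioptric camera is the unified sphere model, in which a 3D point is first mapped onto a unit sphere and then projected through a point displaced by xi along the axis. The sketch below projects a single 3D point under that model; the intrinsics (xi, f, cx, cy) are assumed values for illustration, not the dataset's actual camera parameters:

```python
import math

def project_unified(X, Y, Z, xi=1.0, f=300.0, cx=320.0, cy=240.0):
    """Project a 3D point (X, Y, Z) to pixel (u, v) with the unified
    sphere model. xi = 1 corresponds to a para-catadioptric camera;
    f, cx, cy are assumed intrinsics for illustration only.
    """
    rho = math.sqrt(X * X + Y * Y + Z * Z)
    denom = Z + xi * rho          # projection center displaced by xi
    x, y = X / denom, Y / denom   # point on the normalized image plane
    return f * x + cx, f * y + cy
```

A point on the optical axis, e.g. `project_unified(0, 0, 5)`, lands at the principal point (320, 240); off-axis points bend toward the center as xi grows, reproducing the catadioptric distortion.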

Related publication 1: Cinaroglu, I. and Bastanlar, Y. (2014), A Direct Approach for Human Detection with Catadioptric Omnidirectional Cameras, IEEE Conference on Signal Processing and Communications Applications (SIU) 2014.
Related publication 2: Cinaroglu, I. and Bastanlar, Y. (2016), A Direct Approach for Object Detection with Catadioptric Omnidirectional Cameras, Signal, Image and Video Processing, Volume 10(2), February 2016, Pages 413-420. DOI:10.1007/s11760-015-0768-2.


6) Panoramic image dataset (with annotations) to be used for car detection

The dataset (5 MB) contains: i) 25 para-catadioptric images, ii) 50 cylindrical panoramic images obtained from the catadioptric images (25 original and 25 mirrored), with car annotations, iii) 50 spherical panoramic images obtained from the catadioptric images, with car annotations.

Related publication: Karaimer, H.C. and Bastanlar, Y. (2014), Car Detection with Omnidirectional Cameras Using Haar-like Features and Cascaded Boosting (in Turkish), IEEE Conference on Signal Processing and Communications Applications (SIU) 2014.


7) Image datasets to be used in studies on hybrid structure-from-motion and omnidirectional camera calibration

[Sample images: catadioptric, fisheye, and perspective cameras]

* Hybrid Set 1: A perspective and a catadioptric omnidirectional camera (together with camera parameters and calibration images).
* Hybrid Set 2: An omnidirectional camera and frames (approx. one per second) extracted from a perspective video (with camera parameters and calibration images).
* Hybrid Set 3: A perspective, a fisheye and a para-catadioptric omnidirectional camera (together with camera parameters and calibration images).
* Hybrid Set 4: A perspective and a catadioptric omnidirectional camera (together with camera parameters and calibration images).
* Hyperbolic Calibration Set: A set of calibration images for a catadioptric camera with a hyperbolic mirror (NeoVision H3S).

Related publication: Bastanlar, Y. et al. (2012), Multi-view Structure-from-Motion for Hybrid Camera Scenarios, Image and Vision Computing, vol. 30(8), pp. 557-572.