Datasets
1) Outdoor panoramic image dataset for semantic segmentation
Related publication: Orhan, S., Bastanlar, Y. Semantic segmentation of outdoor panoramic images, Signal, Image and Video Processing, 2021, DOI:10.1007/s11760-021-02003-3
2) Camera-trap dataset for animal detection or animal/non-animal image classification
* The dataset belongs to and is shared with the permission of the Republic of Turkey, Ministry of Forest and Water Affairs. It contains animals of various species and sizes. 2585 images contain animals (some contain multiple animals) and 1420 images do not contain any animal. Image size varies from 1024x1280 to 2448x3264. About half of the images are captured at night, while the other half is captured in daytime.
* Image annotations are provided as PASCAL VOC annotation (.xml) files; more information is available on the PASCAL VOC Challenge website. Annotations mark animals only, without specifying the species. In other words, if no object tag exists in the .xml file, no animal exists in that image (see the parsing sketch below this list).
* Images are grouped according to their associated camera traps. The file name format is {DatasetAbbreviation}_{Field}_{Camera}_{Image Number}. For example, OB_A1_K1_0183.JPG denotes the 183rd image from the first camera (K1) of field A1. OB is fixed for this dataset.
* This is the annotated part of the dataset. The original dataset (all images from all cameras) is also available upon request.
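Since each .xml file lists animals as object tags, empty and non-empty images can be separated with any standard XML parser. The sketch below is a minimal example, assuming the usual PASCAL VOC layout; the annotation path in it is a placeholder.

    # Minimal sketch, assuming the standard PASCAL VOC .xml layout.
    # The annotation path below is a placeholder, not part of the dataset.
    import os
    import xml.etree.ElementTree as ET

    def count_animals(xml_path):
        """Number of annotated animals (<object> tags) in one VOC file."""
        root = ET.parse(xml_path).getroot()
        return len(root.findall("object"))

    def parse_filename(image_name):
        """Split a name such as OB_A1_K1_0183.JPG into its components."""
        dataset, field, camera, number = os.path.splitext(image_name)[0].split("_")
        return {"dataset": dataset, "field": field, "camera": camera, "number": int(number)}

    print(parse_filename("OB_A1_K1_0183.JPG"))
    # -> {'dataset': 'OB', 'field': 'A1', 'camera': 'K1', 'number': 183}
    # count_animals("annotations/OB_A1_K1_0183.xml") == 0 means the image contains no animal.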
Related publication: Tekeli, U., Bastanlar, Y. Elimination of Useless Images from Raw Camera-Trap Data, Turkish Journal of Electrical Engineering and Computer Sciences, 2019, Volume 27, pp 2395-2411, DOI:10.3906/elk-1808-130
3) Fisheye video dataset for vehicle classification
Videos containing cars, vans, motorcycles or pedestrians. Some videos contain more than one object (some with occlusions); these are noted in the spreadsheet provided.
Related publication: Baris, I. and Bastanlar, Y., Classification and Tracking of Traffic Scene Objects with Hybrid Camera Systems, IEEE International Conference on Intelligent Transportation Systems (ITSC 2017), 16-19 October 2017, Yokohama, Japan. Copyright: IEEE. https://ieeexplore.ieee.org/document/8317588/
4) Catadioptric camera video dataset for vehicle classification
Set 1: contains 124 videos for cars (in 5 parts, ~270 MB each).
Set 2: contains 104 videos for vans (minibus) (in 4 parts, ~290 MB each).
Set 3: contains 49 videos for motorcycles (in 2 parts, ~270 MB each).
For each video, the following data are included in the zip files: i) the video itself (AVI format), ii) the foreground mask of each frame, obtained with background subtraction, iii) the foreground mask of each frame containing the vehicle, iv) the annotated area covered by the vehicle while it is at its closest point to the camera (to be used as ground truth).
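The background-subtraction method behind the provided masks is not detailed here; the sketch below only illustrates how comparable masks could be regenerated from the AVI files, using OpenCV's MOG2 subtractor as an assumed choice (the video file name is a placeholder).

    # Minimal sketch: regenerate foreground masks with background subtraction.
    # MOG2 is an assumed choice, not necessarily the method used for the dataset;
    # the video file name is a placeholder.
    import cv2

    cap = cv2.VideoCapture("car_video.avi")
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                         # 0 = background, 255 = foreground
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small noise
        cv2.imwrite("mask_%04d.png" % frame_idx, mask)
        frame_idx += 1

    cap.release()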
Related publication: Karaimer, H.C. and Bastanlar, Y., Detection and Classification of Vehicles from Omnidirectional Videos using Temporal Average of Silhouettes, Int. Conference on Computer Vision Theory and Applications (2015).
5) Omnidirectional and panoramic image dataset (with annotations) to be used for human and car detection
The dataset also contains panoramic images converted from the omnidirectional ones (see the unwrapping sketch below). Annotations are provided in three sets: i) rectangular bounding boxes that slide and rotate around the image center, for omnidirectional images, ii) the proposed (doughnut-slice) annotations for omnidirectional images, iii) standard (upright) bounding box annotations for panoramic images.
Dataset 2 (5 MB) contains synthetic catadioptric omnidirectional images, formed by projecting perspective images onto a 'defined' omnidirectional camera. One set projects 210 perspective images from the INRIA person dataset; the other projects 466 car side views from the UIUC and Darmstadt perspective image datasets.
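Panoramic views are commonly obtained from a catadioptric image by polar unwrapping of the mirror annulus; the sketch below illustrates the idea. The image center, radii and output size are placeholders, not the calibration used for this dataset.

    # Minimal sketch of polar unwrapping (catadioptric image -> panorama).
    # Center, radii and output size are placeholders; adapt them to the images.
    import cv2
    import numpy as np

    def unwrap_to_panorama(omni, center, r_min, r_max, out_w=1024, out_h=256):
        """Map the annulus [r_min, r_max] around `center` to a rectangular panorama."""
        cx, cy = center
        theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
        radius = np.linspace(r_max, r_min, out_h)          # top row = outer ring
        rad_grid, th_grid = np.meshgrid(radius, theta, indexing="ij")
        map_x = (cx + rad_grid * np.cos(th_grid)).astype(np.float32)
        map_y = (cy + rad_grid * np.sin(th_grid)).astype(np.float32)
        return cv2.remap(omni, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    omni = cv2.imread("omni_example.jpg")                  # placeholder file name
    pano = unwrap_to_panorama(omni, center=(640, 480), r_min=100, r_max=450)
    cv2.imwrite("pano_example.jpg", pano)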
Related publication 1: Cinaroglu, I. and Bastanlar, Y. (2014), A Direct Approach for Human Detection with Catadioptric Omnidirectional Cameras, IEEE Conference on Signal Processing and Communications Applications (SIU) 2014.
Related publication 2: Cinaroglu, I. and Bastanlar, Y. (2016), A Direct Approach for Object Detection with Catadioptric Omnidirectional Cameras, Signal, Image and Video Processing, Volume 10(2), February 2016, Pages 413-420. DOI:10.1007/s11760-015-0768-2.
6) Panoramic image dataset (with annotations) to be used for car detection
Related publication: Karaimer, H.C. and Bastanlar, Y. (2014), Car Detection with Omnidirectional Cameras Using Haar-like Features and Cascaded Boosting (in Turkish), IEEE Conference on Signal Processing and Communications Applications (SIU) 2014.
7) Image datasets to be used in studies on hybrid structure-from-motion and omnidirectional camera calibration
Sample images: catadioptric | fisheye | perspective
* Hybrid Set 1: A perspective and a catadioptric omnidirectional camera (together with camera parameters and calibration images).
* Hybrid Set 2: An omnidirectional camera and frames extracted (approximately one per second) from a perspective video (with camera parameters and calibration images).
* Hybrid Set 3: A perspective, a fisheye and a para-catadioptric omnidirectional camera (together with camera parameters and calibration images).
* Hybrid Set 4: A perspective and a catadioptric omnidirectional camera (together with camera parameters and calibration images).
* Hyperbolic Calibration Set: A set of calibration images for a catadioptric camera with a hyperbolic mirror (NeoVision H3S); a checkerboard calibration sketch for the perspective cameras is given after this list.
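For the perspective cameras in these sets, the intrinsic parameters can be re-estimated from the calibration images with a standard checkerboard calibration along the following lines. The pattern size and file locations are assumptions, and the omnidirectional cameras require a dedicated model (e.g., the sphere camera model), which this sketch does not cover.

    # Minimal sketch of perspective-camera calibration with OpenCV.
    # The checkerboard pattern size and the glob pattern are assumptions.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                  # inner corners (cols, rows), assumed
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calibration/*.jpg"):       # placeholder location
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Intrinsic matrix K:\n", K)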
Related publication: Bastanlar et al. (2012), Multi-view Structure-from-Motion for Hybrid Camera Scenarios, Image and Vision Computing, vol. 30(8), pp. 557-572.