Database of Human Attributes (HAT)
Gaurav Sharma and Frederic Jurie, BMVC 2011
We introduce a new database for learning semantic human attributes. It contains 9344 images, annotated with 27 attributes.
To obtain a large number of images, we used an automatic program to query the popular image-sharing site Flickr with manually specified queries and to download the top-ranked result images. We used more than 320 queries, chosen to retrieve predominantly images of people (e.g. 'soccer kid' rather than 'sunset'). A state-of-the-art person detector was used to extract the human images; the few false positives were removed manually.
The database contains a wide variety of human images: different poses (standing, sitting, running, turned back, etc.), different ages (baby, teen, young, middle-aged, elderly, etc.), different clothes (t-shirts, suits, beachwear, shorts, etc.) and accessories (sunglasses, bags, etc.). It is thus rich in semantic attributes for humans. It also has high variation in scale (from only the upper body to the full person) and in image size, which makes it a challenging database. The figure above shows example images for some of the attributes (images are scaled to the same height for visualization).
The database is split into train, val and test sets. Models are learnt on the train and val sets, and the average precision (AP) for each attribute on the test set is reported as the performance measure. Overall performance is given by the mean average precision (mAP) over the set of attributes.
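As a rough illustration of this protocol, the sketch below computes per-attribute average precision from ranked classifier scores and averages over attributes. This is a generic AP computation (the sum of precision values at each positive, divided by the number of positives), not the official evaluation code released with the database; all function names are illustrative.

```python
def average_precision(labels, scores):
    """AP for one attribute.

    labels: 0/1 ground-truth per test image; scores: classifier outputs.
    Images are ranked by descending score; precision is accumulated at
    each rank where a positive occurs.
    """
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, precision_sum = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0


def mean_average_precision(labels_per_attr, scores_per_attr):
    """mAP: mean of the per-attribute APs (one entry per attribute)."""
    aps = [average_precision(l, s)
           for l, s in zip(labels_per_attr, scores_per_attr)]
    return sum(aps) / len(aps)
```

For example, a classifier that ranks all positives above all negatives for an attribute gets AP 1.0 for that attribute, and mAP simply averages such scores over the 27 attributes.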
Kindly cite Sharma and Jurie when using the database.
Kindly contact Gaurav Sharma to obtain the database. Questions, comments, etc. about the database are welcome.
- P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
- G. Sharma and F. Jurie. Learning discriminative spatial representation for image classification. British Machine Vision Conference, 2011.