Yahoo! Answers Comprehensive Questions and Answers

URL

Change Log

When                    What
September 10th, 2015    Donated by Chirag Shah

Reference

Studies that use the data (in any form) are required to include the following reference:

@inproceedings{Shah:2010:EPA:1835449.1835518,
 author = {Shah, Chirag and Pomerantz, Jeffrey},
 title = {Evaluating and Predicting Answer Quality in Community QA},
 booktitle = {Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval},
 series = {SIGIR '10},
 year = {2010},
 isbn = {978-1-4503-0153-4},
 location = {Geneva, Switzerland},
 pages = {411--418},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/1835449.1835518},
 doi = {10.1145/1835449.1835518},
 acmid = {1835518},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {answer quality evaluation and prediction, community question answering},
}

About the Data

Overview of Data

The data is the Yahoo! Answers corpus as of 10/25/2007. It includes all questions and their corresponding answers; the corpus distributed here contains 4,483,032 questions and their answers. In addition to question and answer text, the corpus contains a small amount of metadata: which answer was selected as the best answer, and the category and sub-category assigned to each question. No personal information is included in the corpus; the question URIs and all user ids were anonymized so that no identifying information is revealed. This dataset may be used by researchers to learn and validate answer extraction models. Part 1 is 1.7 GB and part 2 is 1.9 GB.
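As an illustration, each record in the corpus pairs a question with its answers, the chosen best answer, and its category labels. The sketch below shows one way such records could be streamed for an experiment; the XML layout and the field names (document, subject, content, bestanswer, maincat, subcat), as well as the file name part1.xml, are assumptions about the distribution format rather than documentation of it.

import xml.etree.ElementTree as ET

def iter_records(path):
    """Yield one dict per question without loading the whole file.

    The tag and field names below are assumed, not documented here;
    adjust them to match the actual files in the distribution.
    """
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "document":
            yield {
                "question": elem.findtext("subject", default=""),
                "detail": elem.findtext("content", default=""),
                "best_answer": elem.findtext("bestanswer", default=""),
                "category": elem.findtext("maincat", default=""),
                "subcategory": elem.findtext("subcat", default=""),
            }
            elem.clear()  # free memory as we stream

if __name__ == "__main__":
    # Peek at the first five records of the (hypothetically named) part 1 file.
    for i, record in enumerate(iter_records("part1.xml")):
        print(record["category"], "-", record["question"][:60])
        if i == 4:
            break

Streaming with iterparse rather than parsing the whole tree matters at this scale: each part is well over a gigabyte, so building the full document in memory is rarely practical.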

Paper Abstract

Question answering (QA) helps one go beyond traditional keyword-based querying and retrieve information in a more precise form than a document or a list of documents. Several community-based QA (CQA) services have emerged that allow information seekers to pose their information needs as questions and receive answers from fellow users. A question may receive multiple answers from multiple users, and the asker or the community can choose the best answer. While the asker can thus indicate whether he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as our CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer for a given question based on 13 different criteria. Each answer was rated by five different workers. We then matched their assessments with the actual asker's rating of a given answer. We show that the quality criteria we used faithfully match the asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate the high predictability of our trained models along with the relative merits of each of the features for such prediction. These models support our argument that in the case of CQA, contextual information, such as a user's profile, can be critical in evaluating and predicting content quality.
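To make the best-answer prediction setup concrete, here is a minimal sketch of training a classifier to flag the asker-chosen best answer. The features and model (answer length, question/answer word overlap, logistic regression) and the toy training triples are illustrative stand-ins, not the thirteen criteria or the actual features and classifiers used in the paper.

from sklearn.linear_model import LogisticRegression

def features(question, answer):
    """Toy features: answer length and question/answer word overlap."""
    q_words = set(question.lower().split())
    a_words = set(answer.lower().split())
    overlap = len(q_words & a_words) / max(len(q_words), 1)
    return [len(a_words), overlap]

# Hypothetical training data: (question, answer, is_best_answer) triples.
triples = [
    ("How do I boil an egg?", "Cover with cold water, boil 9 minutes.", 1),
    ("How do I boil an egg?", "No idea, sorry.", 0),
    ("What is Python?", "A programming language with clear syntax.", 1),
    ("What is Python?", "A snake.", 0),
]

X = [features(q, a) for q, a, _ in triples]
y = [label for _, _, label in triples]

model = LogisticRegression().fit(X, y)

# Rank candidate answers for a new question by predicted probability
# of being the best answer.
candidates = ["Boil it for nine minutes.", "Ask someone else."]
scores = model.predict_proba(
    [features("How long to boil an egg?", a) for a in candidates]
)[:, 1]
print(sorted(zip(scores, candidates), reverse=True))

With the full corpus, each question's answers would supply one positive example (the selected best answer) and several negatives, which is exactly the labeling this dataset's metadata provides.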