Abstract

IBM Research undertook a challenge to build a computer system that could compete at the human champion level, in real time, on the American TV quiz show Jeopardy!. The challenge extends to fielding a real‐time automatic contestant on the show itself, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After three years of intense research and development by a core team of about 20 researchers, Watson is performing at human expert levels in terms of precision, confidence, and speed on the Jeopardy! quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that can be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of question answering (QA).

Keywords

Watson, Champion, IBM, Computer science, Architecture, Field (mathematics), Software engineering, Artificial intelligence, Political science

Publication Info

Year: 2010
Type: article
Volume: 31
Issue: 3
Pages: 59-79
Citations: 1492
Access: Closed

Citation Metrics

OpenAlex: 1492
Influential: 87
CrossRef: 693

Cite This

David Ferrucci, Eric W. Brown, Jennifer Chu‐Carroll et al. (2010). Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3), 59-79. https://doi.org/10.1609/aimag.v31i3.2303

Identifiers

DOI
10.1609/aimag.v31i3.2303