Coopetition in Information Retrieval Research

This talk explores a community-based approach to advancing text retrieval research: "coopetitions."

Coopetitions are activities in which competitors cooperate for a common good. Community evaluations such as the Text REtrieval Conference (TREC, trec.nist.gov) are prototypical examples of coopetitions in information retrieval, and they have now been part of the field for more than thirty years. This longevity and the proliferation of shared evaluation tasks suggest that the net impact of community evaluations is indeed positive.

Coopetitions can improve effectiveness for a retrieval task by setting up a collaborative structure: establishing a research cohort and constructing the infrastructure (including problem definition, test collections, scoring metrics, and research methodology) that the participants need to make progress on the task. They can also facilitate technology transfer and amortize the infrastructure costs. Yet these benefits only accrue when the infrastructure is a good abstraction of the real task. Information retrieval's test collection paradigm is becoming increasingly untenable as corpus size grows and search engine effectiveness improves. This talk will review what we have learned about test-collection-based evaluation of search engines from TREC and examine the prospects of search evaluation in the future.



  Date and Time

  • Date: 16 Feb 2023
  • Time: 08:00 PM to 09:30 PM
  • All times are (UTC-05:00) Eastern Time (US & Canada)

  Location

  • Princeton University
  • Princeton, New Jersey
  • United States 08544
  • Building: Computer Science Building
  • Room Number: Room 105


  Speakers

Ellen Voorhees

Topic:

Coopetition in Information Retrieval Research

Biography:

Ellen Voorhees is a Fellow at the US National Institute of Standards and Technology (NIST). For most of her tenure at NIST she managed the Text REtrieval Conference (TREC) project, which develops the infrastructure required for large-scale evaluation of search engines and other information access technology. Voorhees' general research focuses on developing and validating appropriate evaluation schemes to measure system effectiveness for diverse user tasks.

Voorhees is a fellow of the ACM, a member of the ACM SIGIR Academy, and an elected fellow of the Washington Academy of Sciences. She has published numerous articles on information retrieval techniques and evaluation methodologies and serves on the review boards of several journals and conferences.