SemEval 2019

From Openresearch
SemEval 2019
13th International Workshop on Semantic Evaluation 2019
Event in series SemEval
Dates 2019/06/06 (iCal) - 2019/06/07
Homepage: http://alt.qcri.org/semeval2019/
Location
Location: Minneapolis, MN, USA

Important dates
Tutorials: 2019/03/14
Papers: 2019/02/28
Submissions: 2019/02/28
Notification: 2019/04/06
Camera ready due: 2019/04/20
Committees
Organizers: Jonathan May, Ekaterina Shutova, Aurelie Herbelot, Xiaodan Zhu, Marianna Apidianaki, Saif M. Mohammad
Keynote speaker: Samuel R. Bowman
Event


SemEval has evolved from the SensEval word sense disambiguation evaluation series. The SemEval Wikipedia entry and the ACL SemEval Wiki provide a more detailed historical overview. SemEval-2019 will be the 13th workshop on semantic evaluation.

See also: NAACL_HLT_2019

SemEval-2019 will be held June 6-7, 2019 in Minneapolis, USA, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019).

Important Dates

Task Proposals:

  • 26 Mar 2018: Task proposals due
  • 04 May 2018: Task proposal notifications

Setup for the Competition:

  • 20 Aug 2018: CodaLab competition website ready and made public. It should include a basic task description and mailing group information for the task, the trial data, and an evaluation script that participants can download and run on the trial data.
  • 17 Sep 2018: Training and development data ready. CodaLab competition website updated so that the evaluation script is uploaded as part of the competition: when participants upload submissions on the development set, the script immediately checks the submission format and computes results on the development set. This is also the date by which a benchmark system should be made available to participants, and by which the organizers should run the benchmark submission on CodaLab so that participants can see its results on the leaderboard.

Competition and Beyond:

  • 10 Jan 2019: Evaluation start*
  • 31 Jan 2019: Evaluation end*
  • 05 Feb 2019: Results posted
  • 28 Feb 2019: System and Task description paper submissions due by 23:59 GMT -12:00
  • 14 Mar 2019: Paper reviews due (for both systems and tasks)
  • 06 Apr 2019: Author notifications
  • 20 Apr 2019: Camera ready submissions due
  • Summer 2019: SemEval 2019

* The period 10 Jan to 31 Jan 2019 is when task organizers must schedule the evaluation periods for their individual tasks. Evaluation periods for individual tasks are usually 7 to 14 days, but there is no hard and fast rule about this. Contact the organizers of the tasks you are interested in for the exact time frame in which they will conduct their evaluations: they should tell you the date by which they will release the test data and the date by which participant submissions must be uploaded. Note that some tasks may involve more than one sub-task, each with a separate evaluation time frame.

Organizers

  • Jonathan May, ISI, University of Southern California
  • Ekaterina Shutova, University of Cambridge
  • Aurelie Herbelot, University of Trento
  • Xiaodan Zhu, Queen's University
  • Marianna Apidianaki, LIMSI, CNRS, Université Paris-Saclay & University of Pennsylvania
  • Saif M. Mohammad, National Research Council Canada


Introduction

Welcome to SemEval-2019! The Semantic Evaluation (SemEval) series of workshops focuses on the evaluation and comparison of systems that analyse diverse semantic phenomena in text, with the aim of advancing the state of the art in semantic analysis and creating high-quality annotated datasets for a range of increasingly challenging problems in natural language semantics. SemEval provides an exciting forum for researchers to propose challenging research problems in semantics and to build systems and techniques to address them.

SemEval-2019 is the thirteenth workshop in the series of International Workshops on Semantic Evaluation. The first three workshops, SensEval-1 (1998), SensEval-2 (2001), and SensEval-3 (2004), focused on word sense disambiguation, each time growing in the number of languages offered, the number of tasks, and the number of participating teams. In 2007, the workshop was renamed SemEval, and subsequent SemEval workshops evolved to include semantic analysis tasks beyond word sense disambiguation. In 2012, SemEval became a yearly event; it now runs every year, but each edition follows a two-year cycle, i.e., the tasks for SemEval-2019 were proposed in 2018. SemEval-2019 is co-located with the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019) in Minneapolis, Minnesota, USA.