CLEF: Difference between revisions

From Openresearch
Edit summary of the earlier revision: Created page with "{{Event series |Acronym=CLEF |Title=Conference and Labs of the Evaluation Forum |Field=Information Systems |Homepage=http://www.clef-initiative.eu/ }} The CLEF Initiative (Co..."

Edit summary of the later revision: m (no edit summary)
Line 2:
  |Acronym=CLEF
  |Title=Conference and Labs of the Evaluation Forum
- |Field=Information Systems
+ |Field=Natural language processing
  |Homepage=http://www.clef-initiative.eu/
  }}
  The CLEF Initiative (Conference and Labs of the Evaluation Forum, formerly known as Cross-Language Evaluation Forum) is a self-organized body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual and multimodal information with various levels of structure. CLEF promotes research and development by providing an infrastructure for:
- multilingual and multimodal system testing, tuning and evaluation;
+ * multilingual and multimodal system testing, tuning and evaluation;
- investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access;
+ * investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access;
- creation of reusable test collections for benchmarking;
+ * creation of reusable test collections for benchmarking;
- exploration of new evaluation methodologies and innovative ways of using experimental data;
+ * exploration of new evaluation methodologies and innovative ways of using experimental data;
- discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.
+ * discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.

Revision as of 11:01, 19 May 2020

CLEF
Conference and Labs of the Evaluation Forum
Avg. acceptance rate: 0
Avg. acceptance rate (last 5 years): 0

No acceptance-rate data has been recorded for Conference and Labs of the Evaluation Forum (CLEF) in this wiki, so the computed averages shown above are 0% (last 5 years: 0%).

Events

The following events of the series CLEF are currently known in this wiki:

Event | Ordinal | From | To | City | Country | General chair / PC chair | Acceptance rate | Attendees
CLEF 2021 | 12 | Sep 21 | Sep 24 | Bucharest | Romania | Bogdan Ionescu, K. Selcuk Candan, Birger Larsen, Henning Müller, Lorraine Goeuriot | |
CLEF 2020 | | Sep 22 | Sep 25 | Thessaloniki | Online | Avi Arampatzis, Theodora Tsikrika, Evangelos Kanoulas, Stefanos Vrochidis, Hideo Joho, Christina Lioma | |
CLEF 2019 | | Sep 9 | Sep 12 | Lugano | Switzerland | Fabio Crestani, Martin Braschler, Jacques Savoy, Andreas Rauber | |
CLEF 2018 | | Sep 10 | Sep 14 | Avignon | France | Patrice Bellot, Chiraz Trabelsi, Josiane Mothe, Fionn Murtagh | |
CLEF 2017 | | Sep 11 | Sep 14 | Dublin | Ireland | Gareth J. F. Jones, Séamus Lawless, Julio Gonzalo, Liadh Kelly | |
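
OpenResearch runs on Semantic MediaWiki, so an event listing like the one above is normally generated by a query over the individual event pages rather than written by hand. Below is a minimal sketch of such an #ask query; the category and property names (Category:Event, Event in series, Start date, End date, Has location city, Has location country) are illustrative assumptions and may not match the schema this wiki actually uses.

<!-- Hypothetical query; category and property names must be adapted to the wiki's actual schema -->
{{#ask: [[Category:Event]] [[Event in series::CLEF]]
 |?Ordinal
 |?Start date=From
 |?End date=To
 |?Has location city=City
 |?Has location country=Country
 |sort=Start date
 |order=descending
 |format=table
}}

On this page the table is presumably rendered by the {{Event series}} template shown in the diff above, so a hand-written query like this would only be needed for a custom view of the same data.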

Number of Submitted and Accepted Papers (Main Track)

(No submission or acceptance data available; chart is empty.)

Acceptance Rate

(No acceptance-rate data available; chart is empty.)

Locations

(Map of the event locations listed above.)



The CLEF Initiative (Conference and Labs of the Evaluation Forum, formerly known as Cross-Language Evaluation Forum) is a self-organized body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual and multimodal information with various levels of structure. CLEF promotes research and development by providing an infrastructure for:

* multilingual and multimodal system testing, tuning and evaluation;
* investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access;
* creation of reusable test collections for benchmarking;
* exploration of new evaluation methodologies and innovative ways of using experimental data;
* discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.