Dialogue Evaluation

Dear friends, paper submission for Dialogue Evaluation-2023 is now closed. Thank you all for participating!

We look forward to seeing you at the Dialogue Conference in June! You can find detailed information here.

Dialogue Evaluation is a special track of the «Dialogue» conference that brings together shared tasks on practical and research NLP problems for the Russian language. Although Dialogue Evaluation has a competitive component, science is our priority. Our aim is to develop uniform evaluation principles so that text analysis systems can be assessed objectively across different tasks, and to establish technical benchmarks (state-of-the-art solutions). Contact us at dialogue.secretary@gmail.com.

If you are a DE participant and would like to publish a paper describing your solution in the conference proceedings “Computational Linguistics and Intellectual Technologies” (indexed in SCOPUS), please check our Publishing Policy. Participation in a DE shared task does not guarantee publication: all papers, including those describing DE solutions, are peer-reviewed and may be rejected.

PUBLISHING POLICY