Dialogue Evaluation
We are pleased to announce that the descriptions of the Dialogue Evaluation-2023 shared tasks are now available in the competition section.
Dialogue Evaluation is a special «Dialogue» section that brings together shared tasks on various practical and research NLP problems for the Russian language. Although Dialogue Evaluation has a competitive component, science is our priority. Our aim is to develop uniform evaluation principles in order to objectively assess text analysis systems across different tasks and to establish technical benchmarks (state-of-the-art solutions). Contact us at dialogue_evaluation@dialog-21.ru.
If you are a DE participant and would like to publish a paper describing your solution in the Conference Proceedings “Computational Linguistics and Intellectual Technologies” (SCOPUS), please check our Publishing Policy. Participation in a DE shared task does not guarantee publication. All papers, including those based on DE solutions, are peer-reviewed and may be rejected.