Manara - Qatar Research Repository

Don't understand a measure? Learn it: Structured Prediction for Coreference Resolution optimizing its measures

conference contribution
Submitted on 2024-09-22, 08:19 and posted on 2024-09-22, 14:58. Authored by Iryna Haponchyk and Alessandro Moschitti.

An interesting aspect of structured prediction is the evaluation of an output structure against the gold standard. Especially in the loss-augmented setting, the need to find the max-violating constraint has severely limited the expressivity of effective loss functions. In this paper, we trade off exact computation to enable the use and study of more complex loss functions for coreference resolution. Most interestingly, we show that such functions can be (i) learned automatically, even from controversial but commonly accepted coreference measures, e.g., MELA, and (ii) used successfully in learning algorithms. An accurate model comparison in the standard CoNLL-2012 setting shows the benefit of more expressive loss functions.
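
For readers unfamiliar with the setting, the sketch below illustrates the general idea of loss-augmented training for coreference over an antecedent-link output space. It is not the authors' implementation: the string-identity features, the greedy local search standing in for exact max-violation search, and the hand-coded toy_loss (a placeholder for the loss function that the paper instead learns from measures such as MELA) are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of loss-augmented structured
# prediction for coreference with an antecedent-link output space.
# Illustrative assumptions: toy string-identity features, greedy local search
# instead of exact max-violation search, and a hand-coded toy_loss.

NONE = -1  # "no antecedent": the mention starts a new entity


def clusters_from_links(links):
    """Turn antecedent links (links[i] = j < i, or NONE) into entity clusters."""
    parent = list(range(len(links)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in enumerate(links):
        if j != NONE:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(links)):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())


def toy_loss(pred_links, gold_clusters):
    """Non-decomposable clustering loss: fraction of mention pairs whose
    same-entity decision disagrees with the gold clustering."""
    pred = clusters_from_links(pred_links)
    same = lambda cs, a, b: any(a in c and b in c for c in cs)
    n = len(pred_links)
    pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
    wrong = sum(same(pred, a, b) != same(gold_clusters, a, b) for a, b in pairs)
    return wrong / max(len(pairs), 1)


def features(doc, i, j):
    """Toy pairwise features for linking mention i to antecedent j."""
    if j == NONE:
        return {"new_entity": 1.0}
    return {"same_string:%s" % (doc[i] == doc[j]): 1.0,
            "distance:%d" % min(i - j, 3): 1.0}


def link_score(w, doc, i, j):
    return sum(w.get(k, 0.0) * v for k, v in features(doc, i, j).items())


def score(w, doc, links):
    return sum(link_score(w, doc, i, links[i]) for i in range(len(doc)))


def loss_augmented_decode(w, doc, gold_clusters):
    """Approximate argmax_y [score(y) + loss(y, gold)] by greedy local search:
    exact search becomes intractable once the loss no longer decomposes over
    individual antecedent links."""
    n = len(doc)
    # the loss-free best structure decomposes per mention, so start from it
    links = [max(range(NONE, i), key=lambda j: link_score(w, doc, i, j))
             for i in range(n)]
    improved = True
    while improved:
        improved = False
        for i in range(n):
            current = score(w, doc, links) + toy_loss(links, gold_clusters)
            for j in range(NONE, i):  # candidate antecedents for mention i
                if j == links[i]:
                    continue
                trial = list(links)
                trial[i] = j
                val = score(w, doc, trial) + toy_loss(trial, gold_clusters)
                if val > current + 1e-9:  # accept strict improvements only
                    links, current, improved = trial, val, True
    return links


def perceptron_update(w, doc, gold_links, gold_clusters, lr=0.1):
    """Structured-perceptron-style update against the (approximately)
    most violating structure."""
    pred = loss_augmented_decode(w, doc, gold_clusters)
    if pred != gold_links:
        for i in range(len(doc)):
            for k, v in features(doc, i, gold_links[i]).items():
                w[k] = w.get(k, 0.0) + lr * v
            for k, v in features(doc, i, pred[i]).items():
                w[k] = w.get(k, 0.0) - lr * v


# Tiny usage example with string-identity "mentions".
doc = ["obama", "he", "obama", "clinton"]
gold_links = [NONE, NONE, 0, NONE]              # gold entities: {0, 2}, {1}, {3}
gold_clusters = clusters_from_links(gold_links)
w = {}
for _ in range(10):
    perceptron_update(w, doc, gold_links, gold_clusters)
# At test time the loss term is dropped, so decoding is a per-mention argmax.
test_links = [max(range(NONE, i), key=lambda j: link_score(w, doc, i, j))
              for i in range(len(doc))]
print(clusters_from_links(test_links))  # recovers the gold clustering here
```

Because toy_loss scores whole clusterings rather than individual links, the loss-augmented argmax no longer decomposes per mention; that is exactly the situation in which exact inference becomes impractical and an approximation is traded in for the ability to use richer loss functions.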

Other Information

Published in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
License: http://creativecommons.org/licenses/by/4.0/
See conference contribution on publisher's website: https://dx.doi.org/10.18653/v1/p17-1094

Conference information: 55th Annual Meeting of the Association for Computational Linguistics, pages 518–523, Vancouver, Canada, July 30 – August 4, 2017

Language

  • English

Publisher

Association for Computational Linguistics

Publication Year

  • 2017

License statement

This Item is licensed under the Creative Commons Attribution 4.0 International License.

Institution affiliated with

  • Hamad Bin Khalifa University
  • Qatar Computing Research Institute - HBKU

Related Publications

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). (2017). https://doi.org/10.18653/v1/p17-1

Related Datasets

Pradhan et al. (2012). CoNLL-2012. Papers With Code Repository. https://paperswithcode.com/dataset/conll-2012-1