Manara - Qatar Research Repository

Self-Crowdsourcing Training for Relation Extraction

Conference contribution
Revised on 2024-09-22, 14:27 and posted on 2024-09-22, 14:28. Authored by Azad Abad, Moin Nabi, and Alessandro Moschitti.

An expensive step in defining crowdsourcing tasks is preparing the examples and control questions used to instruct the crowd workers. In this paper, we introduce a self-training strategy for crowdsourcing. The main idea is to use an automatic classifier, trained on weakly supervised data, to select examples it labels with high confidence. These examples are then used by our automatic agent to explain the task to crowd workers through a question answering approach. We compared our relation extraction system trained on data annotated (i) with distant supervision and (ii) by workers instructed with our approach. The analysis shows that our method yields a relative improvement of about 11% in F1 for the relation extraction system.
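
The following is a minimal sketch (not the authors' released code) of the selection step the abstract describes: train a relation classifier on weakly (distantly) supervised mentions, score unlabeled sentences, and keep the most confident predictions as the examples an automatic agent could show to crowd workers. The toy sentences, TF-IDF features, and top-k cut-off are illustrative assumptions, not the paper's actual setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy distantly supervised mentions: (sentence, relation label).
weak_data = [
    ("Barack Obama was born in Honolulu.", "place_of_birth"),
    ("Honolulu is the birthplace of Barack Obama.", "place_of_birth"),
    ("Steve Jobs founded Apple in 1976.", "founded"),
    ("Apple was founded by Steve Jobs.", "founded"),
]
unlabeled = [
    "Marie Curie was born in Warsaw.",
    "Larry Page founded Google with Sergey Brin.",
    "The weather in Paris was pleasant.",
]

# Train a simple classifier on the weakly supervised data.
texts, labels = zip(*weak_data)
vectorizer = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(texts), list(labels))

# Rank unlabeled sentences by classifier confidence and keep the top-k;
# these high-confidence examples become instructional material for workers.
probs = clf.predict_proba(vectorizer.transform(unlabeled))
confidence = probs.max(axis=1)
top_k = 2  # hypothetical cut-off, for illustration only
for i in confidence.argsort()[::-1][:top_k]:
    relation = clf.classes_[probs[i].argmax()]
    print(f"{relation:>15}  p={confidence[i]:.2f}  {unlabeled[i]}")
```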

Other Information

Published in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
License: http://creativecommons.org/licenses/by/4.0/
See conference contribution on publisher's website: https://dx.doi.org/10.18653/v1/p17-2082

Conference information: 55th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 518–523, Vancouver, Canada, July 30 – August 4, 2017

Funding

European Commission (H2020-ICT-2014-2), CogNet 671625.

History

Language

  • English

Publisher

Association for Computational Linguistics

Publication Year

  • 2017

License statement

This item is licensed under the Creative Commons Attribution 4.0 International License.

Institution affiliated with

  • Hamad Bin Khalifa University
  • Qatar Computing Research Institute - HBKU

Related Publications

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). (2017). https://doi.org/10.18653/v1/p17-2