Encoder-Decoder Model for Semantic Role Labeling
Abstract (Daza & Frank 2019): We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language. Unlike annotation projection techniques, our model does not need parallel data at inference time. Our approach can be applied in monolingual, multilingual, and cross-lingual settings and is able to produce dependency-based and span-based SRL annotations. We benchmark the labeling performance of our model in different monolingual and multilingual settings using well-known SRL datasets. We then train our model in a cross-lingual setting to generate new SRL-labeled data. Finally, we measure the effectiveness of our method by using the generated data to augment the training data for resource-poor languages, and we perform a manual evaluation to show that the model produces high-quality sentences and assigns accurate semantic role annotations. Our proposed architecture offers a flexible method for leveraging SRL data in multiple languages.
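For an encoder-decoder model to emit sentences together with their SRL annotations, the labeled spans must be linearized into a single target token sequence. The sketch below illustrates one such linearization with bracket-style role markers; the exact marker format and role inventory here are illustrative assumptions, not the scheme released with this dataset.

```python
# Illustrative linearization of span-based SRL annotations into a flat
# target sequence for a seq2seq decoder. The "(# ROLE ... ROLE #)"
# bracket tokens are a hypothetical format chosen for readability.

def linearize(tokens, spans):
    """tokens: list of words.
    spans: non-overlapping (start, end, role) triples, end exclusive,
    sorted by start. Returns one space-joined target string."""
    out = []
    i = 0
    for start, end, role in spans:
        out.extend(tokens[i:start])       # words outside any labeled span
        out.append(f"(# {role}")          # opening bracket + role token
        out.extend(tokens[start:end])     # words inside the labeled span
        out.append(f"{role} #)")          # role token + closing bracket
        i = end
    out.extend(tokens[i:])                # trailing unlabeled words
    return " ".join(out)

tokens = ["The", "cat", "sat", "on", "the", "mat"]
spans = [(0, 2, "A0"), (3, 6, "AM-LOC")]
print(linearize(tokens, spans))
# → (# A0 The cat A0 #) sat (# AM-LOC on the mat AM-LOC #)
```

At inference time the decoder generates such a sequence directly in the target language, from which both the translation and its role spans can be read off by matching the bracket tokens.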

Identifier
DOI https://doi.org/10.11588/data/TOI9NQ
Metadata Access https://heidata.uni-heidelberg.de/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.11588/data/TOI9NQ
Provenance
Creator Daza, Angel
Publisher heiDATA
Contributor Daza, Angel
Publication Year 2020
Rights info:eu-repo/semantics/openAccess
OpenAccess true
Contact Daza, Angel (Leibniz Institute for the German Language)
Representation
Resource Type program source code; Dataset
Format text/markdown; application/zip
Size 8580 bytes; 44537016 bytes
Version 1.0
Discipline Humanities
Spatial Coverage Leibniz Institute for the German Language