Negative Sampling for Learning Knowledge Graph Embeddings

Reimplementation of four KG factorization methods and six negative sampling methods.

Abstract
Knowledge graphs are large, useful, but incomplete knowledge repositories. They encode knowledge through entities and relations which define each other through the connective structure of the graph. This has inspired methods for the joint embedding of entities and relations in continuous low-dimensional vector spaces, that can be used to induce new edges in the graph, i.e., link prediction in knowledge graphs. Learning these representations relies on contrasting positive instances with negative ones. Knowledge graphs include only positive relation instances, leaving the door open for a variety of methods for selecting negative examples. In this paper we present an empirical study on the impact of negative sampling on the learned embeddings, assessed through the task of link prediction. We use state-of-the-art knowledge graph embeddings -- \rescal , TransE, DistMult and ComplEX -- and evaluate on benchmark datasets -- FB15k and WN18. We compare well known methods for negative sampling and additionally propose embedding based sampling methods. We note a marked difference in the impact of these sampling methods on the two datasets, with the "traditional" corrupting positives method leading to best results on WN18, while embedding based methods benefiting the task on FB15k.

Identifier
DOI https://doi.org/10.11588/data/YYULL2
Metadata Access https://heidata.uni-heidelberg.de/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.11588/data/YYULL2
Provenance
Creator Kotnis, Bhushan
Publisher heiDATA
Contributor Kotnis, Bhushan
Publication Year 2019
Rights info:eu-repo/semantics/openAccess
OpenAccess true
Contact Kotnis, Bhushan (NEC Laboratories Europe GmbH)
Representation
Resource Type program source code; Dataset
Format application/zip
Size 19883
Version 1.1
Discipline Humanities
Spatial Coverage Heidelberg University