
Publication

Enriching Word Embeddings for Patent Retrieval with Global Context

The training and use of word embeddings for information retrieval have recently gained considerable attention, showing competitive performance across various domains.

In this study, we explore the use of word embeddings for patent retrieval, a challenging domain, especially for methods based on distributional semantics. We hypothesize that the previously reported limited effectiveness of semantic approaches in this domain, and of word embeddings (word2vec Skip-gram) in particular, stems from the inherently short context window, which is too narrow for the model to capture the full complexity of the patent domain.

To address this limitation, we jointly draw from local and global contexts for embedding learning. We do this in two ways: (1) adapting the Skip-gram model's vectors using global retrofitting, and (2) filtering word similarities using global context. We measure patent retrieval performance using BM25 and LM Extended Translation models and observe significant improvements over three baselines.
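The paper's exact formulations are not reproduced on this page. The Python sketch below only illustrates the two ideas in simplified form, assuming a Faruqui-style retrofitting update that pulls each Skip-gram vector toward words a global (document-level) model considers similar, and a plain similarity threshold for the filtering step. All names (retrofit_to_global_neighbors, global_neighbors, global_sims) and hyperparameters (alpha, beta, threshold) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def retrofit_to_global_neighbors(local_vecs, global_neighbors,
                                 alpha=1.0, beta=1.0, iterations=10):
    """Faruqui-style retrofitting sketch: the neighbor sets come from a
    global-context model rather than a lexicon (an assumption here)."""
    # copy so the original Skip-gram vectors stay available as anchors
    vecs = {w: v.copy() for w, v in local_vecs.items()}
    for _ in range(iterations):
        for word, neighbors in global_neighbors.items():
            nbrs = [n for n in neighbors if n in vecs]
            if word not in vecs or not nbrs:
                continue
            # weighted average of the original local vector and the
            # current vectors of the word's globally similar neighbors
            nbr_sum = np.sum([vecs[n] for n in nbrs], axis=0)
            vecs[word] = (alpha * local_vecs[word] + beta * nbr_sum) / (
                alpha + beta * len(nbrs))
    return vecs

def filter_by_global_context(candidates, global_sims, threshold=0.3):
    """Keep a locally similar term only if the global model also
    considers it related; the threshold is a tunable assumption."""
    return {term: sim for term, sim in candidates.items()
            if global_sims.get(term, 0.0) >= threshold}

# Toy usage with made-up patent vocabulary and 2-d vectors:
local = {"valve": np.array([0.2, 0.9]),
         "pump": np.array([0.3, 0.8]),
         "compressor": np.array([0.1, 0.7])}
enriched = retrofit_to_global_neighbors(local, {"valve": ["pump", "compressor"]})
kept = filter_by_global_context({"pump": 0.9, "sealing": 0.6},
                                global_sims={"pump": 0.8, "sealing": 0.1})
```

With alpha = beta, each update balances fidelity to the original Skip-gram vector against agreement with the word's global neighborhood, mirroring the stated goal of combining local and global evidence.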


S. Hofstätter, N. Rekabsaz, M. Lupu, C. Eickhoff, A. Hanbury: Enriching Word Embeddings for Patent Retrieval with Global Context. In: Proc. European Conference on Information Retrieval (ECIR), Springer LNCS 11437, Cologne, Germany (2019), pp. 810–818.
