Soft Cosine Measure: Capturing Term Similarity in the Bag of Words VSM

Authors

NOVOTNÝ Vít

Year of publication: 2019
Type: Appeared in Conference without Proceedings
MU Faculty or unit

Faculty of Informatics

Description

Our work is a scientific poster that was presented at the ML Prague 2019 conference, held on February 22–24, 2019.

The standard bag-of-words vector space model (VSM) is efficient and ubiquitous in information retrieval, but it underestimates the similarity of documents that share meaning but differ in terminology. To overcome this limitation, Sidorov et al. (2014) proposed the Soft Cosine Measure (SCM), which incorporates term similarity relations. Charlet and Damnati (2017) showed that the SCM with word-embedding similarities is highly effective in question-answering systems. However, the orthonormalization algorithm proposed by Sidorov et al. has an impractical time complexity of O(n^4), where n is the size of the vocabulary.
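
For reference, the SCM replaces the inner products of the ordinary cosine similarity with bilinear forms over a term similarity matrix. In the notation below (which is ours, not the poster's), x and y are bag-of-words vectors and s_{ij} is the similarity between the i-th and j-th vocabulary terms, with s_{ii} = 1:

\mathrm{soft\_cosine}(\mathbf{x}, \mathbf{y}) = \frac{\sum_{i,j} s_{ij}\, x_i\, y_j}{\sqrt{\sum_{i,j} s_{ij}\, x_i\, x_j}\; \sqrt{\sum_{i,j} s_{ij}\, y_i\, y_j}}

With s_{ij} derived, for example, from word-embedding similarities, two documents can score highly even when they share no terms verbatim; when the similarity matrix is the identity, the formula reduces to the standard cosine similarity.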

In our work, we prove a tighter lower worst-case time complexity bound of O(n^3). We also present an algorithm for computing the similarity between documents, and we show that its worst-case time complexity is O(1) under realistic conditions. Lastly, we describe the implementation of the SCM in general-purpose vector databases such as Annoy and Faiss, and in the inverted indices of text search engines such as Apache Lucene and ElasticSearch. Our results enable the deployment of the SCM in real-world information retrieval systems.
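
To make the measure concrete, the following is a minimal NumPy sketch of the dense computation; the toy vocabulary, similarity values, and document vectors are illustrative and are not taken from the poster (a real implementation would exploit the sparsity of the document vectors and of the term similarity matrix rather than dense matrix products).

import numpy as np

def soft_cosine(x, y, S):
    """Soft cosine measure between bag-of-words vectors x and y,
    where S[i, j] is the similarity between terms i and j."""
    numerator = x @ S @ y
    denominator = np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y)
    return numerator / denominator

# Toy vocabulary: ["play", "game", "match"]; "game" and "match" are related terms.
S = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.8],
              [0.1, 0.8, 1.0]])
x = np.array([1.0, 1.0, 0.0])  # document mentioning "play" and "game"
y = np.array([1.0, 0.0, 1.0])  # document mentioning "play" and "match"

print(soft_cosine(x, y, S))          # ~0.91: related terms raise the score
print(soft_cosine(x, y, np.eye(3)))  # 0.5: standard cosine ignores the relation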

