TY - GEN
T1 - An Efficient Approach for Findings Document Similarity Using Optimized Word Mover’s Distance
AU - Dey, Atanu
AU - Jenamani, Mamata
AU - De, Arijit
PY - 2023/12/4
Y1 - 2023/12/4
N2 - We introduce Optimized Word Mover’s Distance (OWMD), a similarity function that compares two sentences based on their word embeddings. The method determines the degree of semantic similarity between two sentences by considering their interdependent representations. Not all words within a sentence are relevant for determining aspect-level contextual similarity with another sentence. To account for this, OWMD works in two stages: first, it reduces the system’s complexity by selecting words from the sentence pair according to a predefined set of dependency parsing criteria; second, it applies the word mover’s distance (WMD) method to the selected words. WMD is used to measure the dissimilarity of two sentences because it represents the minimal “journey time” required for the embedded words of one sentence to reach the embedded words of the other. Finally, applying an exponential function to the inverse of the OWMD dissimilarity score yields the resulting similarity score, called Optimized Word Mover’s Similarity (OWMS). On the STSb-Multi-MT dataset, OWMS reduces the MSE, RMSE, and MAD error rates by 66.66%, 40.70%, and 37.93%, respectively, compared to previous approaches. On the Semantic Textual Similarity (STS) dataset, OWMS reduces the MSE, RMSE, and MAD error rates by 85.71%, 62.32%, and 60.17%, respectively. For the STSb-Multi-MT and STS datasets, the proposed approach reduces run time by 33.54% and 49.43%, respectively, compared to the best existing approach.
KW - Contextual similarity
KW - Document distance
KW - Document similarity
KW - NLP
KW - Optimization
KW - Word embedding
KW - Word mover’s distance
UR - http://www.scopus.com/inward/record.url?scp=85177873174&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/c62c974c-a83f-3ead-8d44-478445cfad92/
U2 - 10.1007/978-3-031-45170-6_1
DO - 10.1007/978-3-031-45170-6_1
M3 - Conference contribution
SN - 978-3-031-45169-0
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 11
BT - Lecture Notes in Computer Science
A2 - Maji, Pradipta
A2 - Pal, Nikhil R.
A2 - De, Rajat K.
A2 - Huang, Tingwen
A2 - Chaudhury, Santanu
ER -