Sciweavers

ACL
2015

A Multitask Objective to Inject Lexical Contrast into Distributional Semantics

Distributional semantic models have trouble distinguishing strongly contrasting words (such as antonyms) from highly compatible ones (such as synonyms), because both kinds tend to occur in similar contexts in corpora. We introduce the multitask Lexical Contrast Model (mLCM), an extension of the effective Skip-gram method that optimizes semantic vectors on the joint tasks of predicting corpus contexts and making the representations of WordNet synonyms closer than those of matching WordNet antonyms. mLCM outperforms Skip-gram both on general semantic tasks and on synonym/antonym discrimination, even when no direct lexical contrast information about the test words is provided during training. mLCM also shows promising results on the task of learning a compositional negation operator mapping adjectives to their antonyms.
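The joint objective described above can be sketched as a Skip-gram negative-sampling term plus a max-margin lexical-contrast term that pushes a word's similarity to its synonyms above its similarity to its antonyms. This is a minimal illustrative sketch, not the paper's exact formulation: the loss functions, margin, weighting `lam`, and all names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100, 50
W = rng.normal(scale=0.1, size=(vocab, dim))  # word vectors (illustrative random init)

def cos(u, v):
    # cosine similarity between two vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def skipgram_loss(target, context, negatives):
    # standard negative-sampling Skip-gram term:
    # maximize sigma(t.c) for true contexts, sigma(-t.n) for sampled negatives
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    loss = -np.log(sig(W[target] @ W[context]))
    for n in negatives:
        loss -= np.log(sig(-(W[target] @ W[n])))
    return loss

def contrast_loss(word, synonym, antonym, margin=0.4):
    # max-margin lexical-contrast term (assumed form): penalize cases where
    # the antonym is not at least `margin` less similar than the synonym
    return max(0.0, margin - cos(W[word], W[synonym]) + cos(W[word], W[antonym]))

def multitask_loss(target, context, negatives, synonym, antonym, lam=1.0):
    # joint objective: corpus prediction + WordNet-derived contrast signal
    return skipgram_loss(target, context, negatives) + lam * contrast_loss(target, synonym, antonym)
```

In training, both terms would be minimized jointly by gradient descent over the shared vectors `W`, so the contrast signal reshapes the same space used for context prediction.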
Nghia The Pham, Angeliki Lazaridou, Marco Baroni
Added 13 Apr 2016
Updated 13 Apr 2016
Type Journal
Year 2015
Where ACL
Authors Nghia The Pham, Angeliki Lazaridou, Marco Baroni