Mining adversarial patterns via regularized loss minimization

Traditional classification methods assume that the training and test data arise from the same underlying distribution. In several adversarial settings, however, the test set is deliberately constructed to increase the error rate of the classifier; a prominent example is spam email, where words are transformed to get around the word-based features embedded in a spam filter. In this paper we model the interaction between a data miner and an adversary as a Stackelberg game with convex loss functions. We solve for the Nash equilibrium, a pair of strategies (classifier weights, data transformations) from which neither the data miner nor the adversary has an incentive to deviate. Experiments on synthetic and real data demonstrate that the Nash equilibrium solution leads to classifiers that are more robust to subsequent manipulation of the data and also provides interesting insights about both the data miner and the adversary.

Keywords: Stackelberg game
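
The abstract describes the data miner's move as regularized loss minimization and the adversary's move as a data transformation, but does not spell out how the equilibrium pair is computed. The sketch below is only an illustration of that interaction, assuming a regularized logistic loss for the data miner and a simple L2-budgeted feature shift for the adversary, combined by naive best-response alternation; the helper names (`fit_weights`, `transform_positives`, `best_response_iteration`) and the specific adversary model are assumptions for this example, not the procedure used in the paper.

```python
import numpy as np

def fit_weights(X, y, lam=0.1, lr=0.1, iters=200):
    # Data miner's move: minimise L2-regularised logistic loss by gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margins = y * (X @ w)                       # labels y are in {-1, +1}
        sig = 1.0 - 1.0 / (1.0 + np.exp(-margins))  # sigmoid(-margin)
        grad = -(X * (y * sig)[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def transform_positives(X, y, w, budget=0.5):
    # Adversary's move (assumed here): shift positive points against the weight
    # vector, subject to a fixed L2 budget per point.
    X_adv = X.copy()
    direction = w / (np.linalg.norm(w) + 1e-12)
    X_adv[y == 1] -= budget * direction
    return X_adv

def best_response_iteration(X, y, rounds=20, tol=1e-4):
    # Naive alternation of the two moves until the weights stop changing.
    X_cur, w = X.copy(), np.zeros(X.shape[1])
    for _ in range(rounds):
        w_new = fit_weights(X_cur, y)
        X_cur = transform_positives(X, y, w_new)
        if np.linalg.norm(w_new - w) < tol:
            break
        w = w_new
    return w, X_cur

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),   # legitimate mail
                   rng.normal(+1.0, 1.0, (50, 2))])  # spam
    y = np.concatenate([-np.ones(50), np.ones(50)])
    w_eq, X_shifted = best_response_iteration(X, y)
    print("weights after alternation:", w_eq)
```

Alternating best responses is only one (and generally the crudest) way to reason about such a game; the point of the sketch is simply to make the two moves, weight fitting and data transformation, concrete.
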
Added: 20 May 2011
Updated: 20 May 2011
Type: Journal
Year: 2010
Where: ML
Authors: Wei Liu, Sanjay Chawla