Adversarial Machine Learning

Machine learning techniques were originally designed for environments in which the training and test data are assumed to be generated by the same (though possibly unknown) distribution or process. In the presence of intelligent, adaptive adversaries, however, this working hypothesis is likely to be violated.

Applying machine learning to use cases like fraud detection, anti-money laundering, and information security presents a unique set of challenges:

- Little or no labeled data
- Non-stationary data distributions
- Model decay
- Counterfactual conditions

This event is entirely devoted to understanding how modern machine learning methods can be applied to these adversarial environments. We will have hands-on workshops as well as talks by leading practitioners from industry and academia.
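As a toy illustration of the non-stationarity problem above (everything here — the distributions, the threshold rule, and the numbers — is invented for the sketch, not taken from any of the talks): a model fit under the usual i.i.d. assumption holds up on stationary test data, then degrades once an adaptive adversary shifts the fraud-class distribution toward legitimate behavior.

```python
import random

random.seed(0)

# Toy setting: one feature, class 0 = legitimate, class 1 = fraud.
# Training data assumes the two classes come from fixed distributions.
train = [(random.gauss(0, 1), 0) for _ in range(500)] + \
        [(random.gauss(3, 1), 1) for _ in range(500)]

# "Fit" the simplest possible classifier: a threshold at the midpoint
# of the two class means.
mean0 = sum(x for x, y in train if y == 0) / 500
mean1 = sum(x for x, y in train if y == 1) / 500
threshold = (mean0 + mean1) / 2

def accuracy(data):
    """Fraction of points the threshold rule classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Stationary test set: drawn from the same distributions as training.
test_iid = [(random.gauss(0, 1), 0) for _ in range(500)] + \
           [(random.gauss(3, 1), 1) for _ in range(500)]

# Adaptive adversary: fraudsters mimic legitimate behavior, pulling
# the fraud-class distribution toward the decision boundary.
test_adv = [(random.gauss(0, 1), 0) for _ in range(500)] + \
           [(random.gauss(1.2, 1), 1) for _ in range(500)]

acc_iid = accuracy(test_iid)   # high: the i.i.d. assumption holds
acc_adv = accuracy(test_adv)   # degrades: the distribution has shifted
```

The fix is never one-shot: in adversarial settings the model, like the adversary, has to keep adapting — which is exactly the model-decay and non-stationarity challenge the workshops address.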



Sep 10, 2016, 9:30 AM - 5:00 PM



Geekdom SF
620 Folsom St #100
San Francisco, CA 94107


09:00 - 09:30  Registration

09:30 - 10:15  Tackling Bitcoin's Fraud Problems (Soups)
10:15 - 11:45  TensorFlow Workshop on Adversarial Examples (Illia)
11:45 - 12:15  Learning from Large Bodies of Malware (Zach)

12:15 - 13:00  Lunch

13:00 - 13:45  Dealing with Counterfactual Model Decisions (Alyssa)
13:45 - 14:30  Tackling the Full Spectrum of Threats Confronting the Enterprise (Arshak)

14:30 - 15:00  Break

15:00 - 15:45  Assessing Merchant Fraud Risk at Square (Thomson)
16:00 - 17:00  ML-based Detection for Fraud and Abuse (Jacob)
