Adversarial Machine Learning

Machine learning techniques were originally designed for settings in which the training and test data are assumed to come from the same (though possibly unknown) distribution or process. In the presence of intelligent, adaptive adversaries, however, that working assumption is likely to be violated.
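To make the premise concrete: an adversary who can probe a deployed model can often find a small, targeted change to an input that flips the model's decision, so the data the model sees in production no longer looks like its training data. The sketch below illustrates this with a gradient-sign perturbation against a toy logistic-regression fraud scorer; the weights, inputs, and perturbation budget are illustrative assumptions, not material from the event.

```python
import numpy as np

# Illustrative sketch only: a gradient-sign perturbation against a toy
# logistic-regression "fraud scorer". Weights, inputs, and the budget
# are made up for demonstration.

rng = np.random.default_rng(0)

w = rng.normal(size=10)   # hypothetical trained weights
b = 0.1                   # hypothetical bias

def flag_score(x):
    """Probability that the model flags x (e.g. as fraudulent)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=10)   # a transaction the model currently scores
eps = 0.3                 # attacker's per-feature perturbation budget

# Gradient of the flag score with respect to the input features.
s = flag_score(x)
grad = s * (1.0 - s) * w

# Move each feature a small step against the gradient's sign to lower the score.
x_adv = x - eps * np.sign(grad)

print(f"score before: {flag_score(x):.3f}, after: {flag_score(x_adv):.3f}")
```

The i.i.d. assumption fails here precisely because the adversary chooses the perturbed input after observing how the model behaves.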

Applying machine learning to use cases like fraud, anti-money laundering, and infosec presents a unique set of challenges:

- Little or no labeled data
- Non-stationary data distributions
- Model decay
- Counterfactual conditions

This event is entirely devoted to understanding how modern machine learning methods can be applied in these adversarial environments. We will have hands-on workshops as well as talks by leading practitioners from industry and academia.

 

DATE

Sep 10, 2016, 9:30 AM - 5:00 PM

 

LOCATION

Geekdom SF
620 Folsom St #100
San Francisco, CA 94107

 

TICKETS

Individual: 795.00
Group: 2,500.00 / 4,000.00

SCHEDULE

09:00 - 09:30 Registration

09:30 - 11:00 TensorFlow Workshop on Adversarial Examples (Illia)
11:00 - 12:00 AML/KYC for the Ripple Consensus Ledger (Gilles)

12:00 - 13:00 Lunch

13:00 - 13:45 Multi-armed Bandit Approach to Transaction Fraud at Stripe (Alyssa)
13:45 - 14:30 Assessing Merchant Fraud Risk at Square (Thomson)

14:30 - 15:00 Break

15:00 - 15:45 ML-based Detection of Fraud and Abuse (Jacob)
15:45 - 16:30 Learning from Large Bodies of Malware Samples (Zach)
16:30 - 17:00 Closing Remarks (Arshak)


Adversarial ML Topics Covered


Expert Speakers That Understand Adversarial ML Challenges

