Building Artificial Intelligence Products
Artificial intelligence systems are achieving breakthrough results in speech, vision, and text mining, but deploying them into production presents a unique set of challenges:
- Ability to understand (and therefore trust) models
- Designing processes that combine human and artificial intelligence
- Collaboration between business experts and machine learning researchers
- Customer and partner privacy concerns
- Risks of autonomous decisions (e.g., quant fund meltdowns)
This workshop will focus on case studies and best practices for overcoming these challenges in real environments.
April 6, 2017, 9:30a - 5p
- Survey of machine learning / deep learning use cases
- Challenges of putting deep learning models into production
- Case studies
- AI startup funding and revenue models
- Enterprise assimilation of AI
- One-on-one mentoring
Arshak Navruzyan is a machine learning-focused product manager. Arshak has held technology leadership roles at Argyle Data, Alpine Data Labs, and Endeca/Oracle. He has delivered AI solutions for multi-billion-dollar quantitative hedge funds, numerous venture-funded startups, and some of the largest telecoms in the world.
Juan Carlos Asensio is a business developer focused on technology ventures. Juan Carlos has been in charge of launching and fundraising for new business developments in technology across several startups and enterprises. Currently, Juan Carlos is a partner at Invariantes Fund, a software-focused venture capital firm investing in US and Latin American startups.
Recent Blog Posts
Training modern deep networks can take an inordinate amount of time even with the best GPU hardware available. Training Inception-v3 on ImageNet-1000 using 8 NVIDIA Tesla K40s takes about two weeks (Google Research Blog).
One way to keep the predictive accuracy of a large network while reducing the number of its parameters is a training paradigm called "distillation".
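The core of distillation is training the small "student" network to match the temperature-softened output distribution of the large "teacher", blended with the ordinary hard-label loss. A minimal NumPy sketch of that combined objective (the function names and default `T`/`alpha` values here are illustrative choices, not a specific library's API):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution,
    # exposing the teacher's "dark knowledge" about near-miss classes.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target cross-entropy (student vs. teacher at
    temperature T) and standard hard-label cross-entropy."""
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    # Cross-entropy against the teacher's softened distribution.
    soft_loss = -np.mean(np.sum(soft_teacher * np.log(soft_student + 1e-12), axis=-1))
    # Standard cross-entropy against the true integer labels.
    hard_probs = softmax(student_logits)
    hard_loss = -np.mean(np.log(hard_probs[np.arange(len(labels)), labels] + 1e-12))
    # T**2 rescales soft-target gradients to balance the hard-label term.
    return alpha * (T ** 2) * soft_loss + (1 - alpha) * hard_loss
```

In practice the student is trained by minimizing this loss with the teacher's logits precomputed over the training set.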
When creating a feature space for adversarial use cases like payment fraud, account takeover fraud, and internal fraud, data scientists can rely on domain knowledge, intuition, personal experience, and, if labeled data is available, variable selection.
Often the objective of constructing such feature spaces is anomaly/outlier detection: capturing enough attributes and aggregates to delineate normal from extraordinary user behavior.
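As a minimal sketch of that idea, one can score per-account aggregates with robust z-scores (median and MAD rather than mean and standard deviation, so the statistics are not themselves distorted by the outliers being hunted). The feature columns and threshold here are hypothetical, not from any specific fraud system:

```python
import numpy as np

def robust_zscores(X):
    """Per-feature robust z-scores using the median and the median absolute
    deviation (MAD); 1.4826 rescales MAD to match a Gaussian sigma."""
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0)
    return (X - med) / (1.4826 * mad + 1e-12)

def flag_outliers(X, threshold=3.5):
    # A row (e.g. one account's aggregates) is anomalous if any feature
    # deviates strongly from the population norm.
    z = robust_zscores(X)
    return np.where(np.abs(z).max(axis=1) > threshold)[0]

# Hypothetical per-account aggregates: [txn_count_24h, total_amount]
X = np.array([[20, 50], [22, 48], [19, 52], [21, 49], [500, 900]], dtype=float)
suspicious = flag_outliers(X)  # indices of accounts worth reviewing
```

Real systems layer many such aggregates (velocities, geographies, device counts) and more sophisticated detectors on top, but the delineation principle is the same.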
There seems to be very little overlap currently between the worlds of infosec and machine learning. If a data scientist attended Black Hat and a network security expert went to NIPS, they would be equally at a loss.
This is unfortunate, because infosec can definitely benefit from a probabilistic approach, but a significant amount of domain expertise is required to apply ML methods.
Financial institutions have a regulatory requirement to monitor account activity for anti-money laundering (AML). Regulators take the monitoring and reporting requirements very seriously as evidenced by a recent set of FinCEN fines.
One challenge with AML is that it rarely manifests as the activity of a single person, business, account, or transaction. Detection therefore requires behavioral pattern analysis of transactions occurring over time and involving a set of (not obviously) related real-world entities.
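A common first step toward surfacing such entity networks is to link accounts that share evidence of a relation (a counterparty, device, or address, say) and group them into connected components, each of which can then be analyzed as one behavioral case. A self-contained union-find sketch, with hypothetical entity names for illustration:

```python
def find(parent, x):
    # Follow parent pointers to the component root, compressing the path.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def related_entity_groups(links):
    """links: iterable of (entity_a, entity_b) pairs, each evidencing some
    relation (shared counterparty, device, address, ...). Returns the
    connected components as a list of sets."""
    parent = {}
    for a, b in links:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        union(parent, a, b)
    groups = {}
    for node in parent:
        groups.setdefault(find(parent, node), set()).add(node)
    return list(groups.values())

# Hypothetical links: two accounts transacting with the same shell company
# end up in one group, to be scored together over time.
links = [("acct_1", "acct_2"), ("acct_2", "shell_co"), ("acct_9", "acct_10")]
cases = related_entity_groups(links)
```

Aggregating transaction volume and timing per group, rather than per account, is what lets structuring patterns spread across many accounts become visible.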
Machine learning is being used in a variety of domains to restrict or prevent undesirable behavior by hackers, fraudsters, and even ordinary users. Algorithms deployed for fraud prevention, network security, and anti-money laundering belong to the broad area of adversarial machine learning: instead of learning the patterns of benevolent nature, the model is confronted with a malicious adversary looking for loopholes and weaknesses to exploit for personal gain.