The Hardware of Deep Learning

From Nvidia's latest GPUs to Intel's Lake Crest and Google's TPUs, there is a plethora of options for training deep networks. Should you build your own GPU rig, or is it better to use the cloud? Where should you train models? And what about inference at scale?

This conference will focus on best practices for deploying deep learning models into production on a variety of hardware and cloud platforms.

Speakers will discuss topics such as:

  • Recent benchmarks of popular models
  • Current and upcoming GPU architectures
  • Tensor Processing Units (TPUs)
  • Application-specific integrated circuits (ASICs)

DATE

May 18, 2017, 9:00 AM – 5:00 PM

LOCATION

San Francisco

Talks

Jeremy Howard
fast.ai

Writing a GPU-accelerated algorithm from scratch in TensorFlow: a clustering case study


Arshak Navruzyan
Platform.AI

Low-latency prediction in large computer-vision models