The Hardware of Deep Learning

From Nvidia's latest GPUs to Intel's Lake Crest and Google's TPUs, there is a plethora of options for training deep nets. Should you build your own GPU rig, or is it better to use the cloud? Where should you train your models? What about inference at scale?

This conference will focus on best practices for deploying deep learning models into production on a variety of hardware and cloud platforms.

Speakers will discuss topics like:

  • Recent benchmarks of popular models
  • Existing and new GPU architectures
  • Tensor Processing Units (TPUs)
  • Application-specific integrated circuits (ASICs)

DATE

August 19, 2017, 9:00 AM - 5:00 PM

LOCATION

225 Bush St, San Francisco, CA 94104

 
