The Hardware of Deep Learning

From Nvidia's latest GPUs to Intel's Lake Crest and Google's TPUs, there is a plethora of options for training deep networks. Should you build your own GPU rig, or is it better to use the cloud? Where should you train models? And what about inference at scale?

This conference, presented by Startup.ML and General Assembly, will focus on best practices for deploying deep learning models into production on a variety of hardware and cloud platforms.

Speakers will discuss topics like:

  • Distributed TensorFlow
  • GPU programming in Python
  • Recent benchmarks of popular models
  • Existing and new GPU architectures
  • Google's Tensor Processing Unit (TPU)

DATE

September 16, 2017, 9:00 AM - 5:00 PM

LOCATION

225 Bush St, San Francisco, CA 94104

 

Organized by Startup.ML and General Assembly