AI with ML for IoT Workshop (Part 3)

AGENDA

Questions

a.) Given that my IoT endpoints will be limited in capability due to size, cost, and power requirements, how can companion computing, either embedded in the larger system or housed in a companion gateway, help me?

b.) How can knowledge that is created on a given device be exported and used in many other locations?

c.) How will machine learning (with a small “m.l.”) affect behaviour at the edge?

Hands-on Exercise:

Signal Processing with Sensors

Sensors have to be connected to a smartphone or to the cloud to perform any useful classification.

The IoT Problem

Sensor technology needs to:

  • be autonomous
  • avoid consuming bandwidth/power by transmitting unfiltered data (see the back-of-envelope sketch below)
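As a back-of-envelope illustration of the bandwidth point, a small Python sketch; the sample rate, sample size, and reporting interval are assumptions for illustration, not figures from the workshop.

    # Why streaming unfiltered data is costly: compare a raw sensor
    # stream against transmitting only local classification results.
    # Assumed figures (illustrative): 3-axis accelerometer, 100 Hz,
    # 4-byte float samples, one 1-byte class label per second.
    RAW_BYTES_PER_SEC = 100 * 3 * 4      # 1200 B/s of raw samples
    LABEL_BYTES_PER_SEC = 1              # one label per second
    SECONDS_PER_DAY = 86_400

    raw_mb_day = RAW_BYTES_PER_SEC * SECONDS_PER_DAY / 1e6
    label_kb_day = LABEL_BYTES_PER_SEC * SECONDS_PER_DAY / 1e3
    print(f"raw stream:  {raw_mb_day:.1f} MB/day")    # ~103.7 MB/day
    print(f"labels only: {label_kb_day:.1f} kB/day")  # ~86.4 kB/day
    # Classifying on-device cuts uplink traffic by roughly 1200x.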

What Do We Want?

IoT Solution #1

TCID

Train in the Cloud, Infer on the (Embedded) Device

Embedded Devices?

devices built for a specific purpose and included within a larger object

  • accelerometer + shoes 

  • POS/ATM machines

  • a heart rate monitor + wristwatch + mobile

  • cable box

Train your machine learning model + dataset on a cloud-based GPU, TPU, etc., then store the trained model on your embedded device's microcontroller/SOM/COM.
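A minimal sketch of this workflow, assuming TensorFlow/Keras on the cloud side and a TensorFlow Lite flat buffer as the artifact flashed to the device; the toy dataset and layer sizes are illustrative choices, not part of the workshop material.

    # TCID sketch: train in the cloud, export a compact model for the
    # embedded device. Assumes TensorFlow/Keras on the cloud instance.
    import tensorflow as tf

    # 1) Train on a cloud GPU/TPU -- here, a toy image classifier.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=128)

    # 2) Convert to a TensorFlow Lite flat buffer small enough for a
    #    microcontroller/SOM/COM; DEFAULT enables weight quantization.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    # 3) "Store the trained model" on the device: flash model.tflite
    #    onto its storage for offline inference.
    with open("model.tflite", "wb") as f:
        f.write(converter.convert())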

IoT Solution #2

Rely on your endpoint node for the heavy lifting

Edge Computing

compute moves closer to the source of data; the IoT device becomes autonomous (e.g., by running an artificial neural network on board)

Node?

any addressable device on a network

Endpoint Node

connects to the LPWAN and accepts communications back and forth across the network

Benefits

less data flows between your datacenter and the public cloud
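To show what that autonomy looks like in practice, a sketch of offline inference on the endpoint node, assuming the lightweight tflite-runtime package and the model.tflite file produced by the earlier TCID sketch; read_sensor() is a hypothetical driver call.

    # Offline inference on the endpoint node: no smartphone or cloud
    # round-trip. Assumes the tflite-runtime package and the
    # model.tflite flat buffer from the cloud-training step.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    def classify(sample):
        """Run one classification entirely on-device."""
        batch = sample.astype(np.float32)[np.newaxis, ...]
        interpreter.set_tensor(input_details[0]["index"], batch)
        interpreter.invoke()
        scores = interpreter.get_tensor(output_details[0]["index"])[0]
        return int(np.argmax(scores))

    # read_sensor() is a hypothetical driver returning one 28x28 frame:
    # label = classify(read_sensor())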

The Ultimate IoT Solution

TCID and Edge Computing

TCID-EC

  • Train in the Cloud
  • Infer (offline) on the device
  • Update & share the model with other devices via the edge/gateway (see the sketch below)

aka Embedded Devices for IoT
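As one hedged illustration of the "update & share" step, a sketch that pushes the refreshed model through the edge gateway over MQTT, using the paho-mqtt 1.x client; the broker host and topic name are assumptions, not part of the workshop material.

    # Sharing an updated model with sibling devices via the edge
    # gateway. Assumes an MQTT broker runs on the gateway and the
    # paho-mqtt 1.x package; host and topic are placeholders.
    import paho.mqtt.publish as publish

    GATEWAY_HOST = "edge-gateway.local"      # hypothetical broker host
    MODEL_TOPIC = "fleet/models/classifier"  # hypothetical topic

    def publish_model(path="model.tflite"):
        """Push the new model to the gateway; the broker retains it
        and fans it out to every subscribed endpoint node."""
        with open(path, "rb") as f:
            payload = f.read()
        publish.single(MODEL_TOPIC, payload, qos=1, retain=True,
                       hostname=GATEWAY_HOST)

    # Each sibling device subscribes to MODEL_TOPIC, writes the payload
    # over its local model.tflite, and reloads its interpreter.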

What if we add AI to TCID-EC aka Embedded IoT Engineering?

Engineering embedded IoT devices with AI

Let's Go NVIDIA

Photo Credits:  http://www.silicon.co.uk/e-innovation/nvidia-jetson-tx2-206831

NVIDIA GPUs

available in cloud services from Amazon, IBM, and Microsoft

or on premises:

desktops, notebooks, servers, and supercomputers around the world

GPU-accelerated Cloud Services

Amazon Web Services G2 GPU instances
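As a hedged example of reaching those GPUs programmatically, a boto3 sketch that requests a G2 instance; the AMI ID, key pair, and region are hypothetical placeholders.

    # Requesting an AWS G2 GPU instance for cloud training.
    # Assumes boto3 with AWS credentials configured; the AMI ID,
    # key pair, and region below are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # e.g., a GPU/DIGITS AMI of your choice
        InstanceType="g2.2xlarge",   # AWS G2 GPU instance class
        KeyName="training-key",      # hypothetical key pair name
        MinCount=1,
        MaxCount=1,
    )
    print("launched:", response["Instances"][0]["InstanceId"])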

Jetson TX1 Developer Kit 

a full-featured development platform for visual computing in embedded applications

Embedded Applications

https://developer.nvidia.com/deep-learning-getting-started

Jetson TX2

  • 256 CUDA cores
  • 8 GB memory
  • 32 GB storage
  • Ethernet, WLAN, Bluetooth!

Too Expensive @ $500??

Begin Exercise

NVIDIA DIGITS

for image classification, segmentation, and object detection:

  • Design, train, and visualize deep neural networks
  • Download pre-trained models such as AlexNet, GoogLeNet, and LeNet
  • Schedule, monitor, and manage neural network training jobs
  • Import images and sources

DIGITS is available as a free download to members of the NVIDIA Developer Program. If you are not already a member, clicking "Download" will ask you to join the program.

DIGITS is also available as an Amazon Machine Image (AMI) for on-demand usage. Visit the GPU-accelerated cloud images page to learn more. (DIGITS 5 AMI coming soon)

However...

Google TPU Pods

  “One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod.”

https://www.geekwire.com/2017/google-launches-powerful-new-cloud-tpu-machine-learning-chips-google-cloud-platform/

References

Deep Learning Classes and Courses

NVIDIA Deep Learning Institute offers self-paced training and instructor-led workshops

CS229: Machine Learning by Andrew Ng (Baidu)

Deep Learning at Oxford by Nando de Freitas (University of Oxford)

Neural Networks for Machine Learning by Geoffrey Hinton (Google, University of Toronto)

Deep Learning for Computer Vision by Rob Fergus (Facebook, NYU)

Learning From Data by Yaser Abu-Mostafa (Caltech)

Deep Learning for Natural Language Processing (Stanford)

Deep Learning posts on the NVIDIA Parallel Forall technical blog

End of Part 3