Shahrad Payandeh ([email protected])
Using Artificial Neural Networks (ANNs) to give machines capabilities similar to those of the human brain is a popular topic and can be seen across many fields. Besides finding the best NN (Neural Network) and training database, another challenge is implementing it on embedded devices while optimizing performance and power efficiency. Cloud computing is not always an option, especially when a device has no connectivity. In that case, we need a platform that can do signal pre-processing and execute the NN in real time with the lowest possible power consumption, especially when the device operates on a battery.
In previous blogs, which can be found on www2.intrinsyc.com, we went through a few examples of how to use Qualcomm® Snapdragon™ Platforms for NN use cases. We saw that, using different tools (such as Python scripts), we can train a network with databases in Caffe and TensorFlow formats and then use the Qualcomm® Snapdragon™ Neural Processing Engine (NPE) software development kit (SDK) to convert that network for Snapdragon platforms. In this blog, we focus mainly on using Matlab and the ONNX format.
Qualcomm® Snapdragon™ Platforms and the Qualcomm® Snapdragon™ Neural Processing Engine (NPE) software development kit (SDK) are an outstanding choice for creating a customized neural network on low-power, small-footprint devices. The Snapdragon NPE was created to give developers the tools to easily migrate intelligence from the cloud to edge devices.
The Snapdragon NPE provides developers with software tools to accelerate deep neural network workloads on mobile and other edge Internet of Things (IoT) devices powered by Snapdragon processors. Developers can choose the optimal Snapdragon core for the desired user experience – Qualcomm® Kryo™ CPU, Qualcomm® Adreno™ GPU or Qualcomm® Hexagon™ DSP.
In this article, we explore developing and implementing an NN on Snapdragon platforms using Matlab tools, focusing mainly on the ONNX format. We also investigate how Snapdragon platforms can help us reduce power consumption and processing time by using the optimal Snapdragon core and the tools provided by the SNPE SDK.
Design and Develop a Simple DNN
We start by going through the steps of designing and training a Deep Neural Network (DNN) in Matlab, then port that design to Snapdragon and look for the best Snapdragon subsystem to do the job.
Handwritten Digit Recognition System
Let’s start with a handwritten digit recognition system using a DNN. One of the major differences between this network and the Audio Digit Recognition System (covered in a previous blog) is that this system doesn’t require any pre-processing of the input signal. Snapdragon platforms, with their heterogeneous computing architecture, have powerful engines for audio and image processing in the Digital Signal Processor (DSP) and Graphics Processing Unit (GPU).
For the design and training part of this network we use Matlab. The network is a three-layer convolution-based network. We also use the handwritten digit database that ships with Matlab (it is similar to the MNIST database; for the source of that database, please check the Matlab documentation).
So, let’s walk through the script:
- Here we select the database
[XTrain,YTrain] = digitTrain4DArrayData;
[XValidation,YValidation] = digitTest4DArrayData;
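As a quick sanity check, we can inspect what these functions return (a sketch; per the Matlab documentation, the images are 28x28x1 grayscale digits with categorical labels):

```matlab
% XTrain is a 4-D array of 28x28x1 grayscale digit images;
% YTrain holds the corresponding categorical labels (digits 0-9).
size(XTrain)          % image height x width x channels x number of images
categories(YTrain)    % the ten digit classes
```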
- Now we define the layers
layers = [ imageInputLayer([28 28 1],'Name','input','Normalization','none')
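The full layer stack is longer than the single line shown above. A possible completion, modeled on Matlab's standard digit-classification example, looks like the following (a sketch; the filter sizes and counts are assumptions, not necessarily the values used in the original script):

```matlab
% Three convolutional stages followed by a classifier head.
layers = [
    imageInputLayer([28 28 1],'Name','input','Normalization','none')
    convolution2dLayer(3,8,'Padding','same')   % assumed filter size/count
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)                    % ten digit classes
    softmaxLayer
    classificationLayer];
```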
- and set the training options
options = trainingOptions('sgdm', ...
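A typical set of training options for this kind of network might look like this (a sketch; the specific values are illustrative assumptions, not the blog's original settings):

```matlab
% SGDM solver with a validation set monitored during training.
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.01, ...
    'MaxEpochs',4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',{XValidation,YValidation}, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');
```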
- And train it (for details on the training process, please check the Matlab documentation)
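Training and exporting to ONNX can be sketched as follows (assuming the `layers` and `options` variables defined above; `exportONNXNetwork` requires Matlab's ONNX support package):

```matlab
% Train the network on the digit images.
net = trainNetwork(XTrain,YTrain,layers,options);

% Export the trained network to ONNX so it can be handed to the
% SNPE SDK's model-conversion tooling.
exportONNXNetwork(net,'digits.onnx');
```

The resulting ONNX file can then be converted into a Snapdragon-ready DLC model with the SNPE SDK's ONNX conversion tool (see the SNPE documentation for the exact command-line options).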