Apps optimized for the Qualcomm Hexagon DSP can run faster and consume less power. Qualcomm Technologies is introducing even more support for TensorFlow, Google's machine learning framework: it's now optimized for the Hexagon 682 DSP, which is integrated into our premium-tier Qualcomm Snapdragon 835 processor.
TensorFlow is Google's open source machine learning software library, which means developers are free to use it to create their own models that add artificial intelligence to their mobile and desktop apps or cloud services. They can do so in a variety of ways that take advantage of the Snapdragon 835's heterogeneous cores, and the most effective of those is to make full use of the DSP. Google uses TensorFlow for machine learning features across many products, including Google Photos and Google Cloud Speech.
TensorFlow is designed to run on the various processing units inside a processor. Snapdragon processors integrate a CPU, a GPU, and many other technologies, including a Digital Signal Processor, or DSP. Our DSP architecture is designed to process certain audio and video workloads faster and at lower power than a CPU or GPU, which is why it pays to exploit its advantages. The chart below illustrates the benefits of apps optimized for our DSP.
Now, after close collaboration with Google, TensorFlow can run on the DSP, so apps run faster and more efficiently without taxing the entire system.
Qualcomm Technologies also offers support for TensorFlow on Snapdragon processors via the Qualcomm Snapdragon Neural Processing Engine SDK, a heterogeneous software framework engineered to let OEMs run TensorFlow-trained models on whichever core (CPU, GPU, or DSP) delivers the best performance and power profile for the intended user experience.
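To make that runtime selection concrete, here is a minimal sketch of how an Android app might ask the SDK to prefer the Hexagon DSP, based on our reading of the SNPE Java API. The class name DspModelLoader, the Application reference, and the .dlc file path are placeholders, and the exact builder methods can vary between SDK releases.

```java
import android.app.Application;

import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

import java.io.File;
import java.io.IOException;

public final class DspModelLoader {

    /**
     * Loads a model that has already been converted to the .dlc format and
     * asks SNPE to prefer the Hexagon DSP, falling back to the GPU and then
     * the CPU if a runtime is unavailable on the device.
     */
    public static NeuralNetwork loadPreferringDsp(Application app, File dlcFile)
            throws IOException {
        return new SNPE.NeuralNetworkBuilder(app)
                .setRuntimeOrder(NeuralNetwork.Runtime.DSP,  // try the Hexagon DSP first
                                 NeuralNetwork.Runtime.GPU,  // then the GPU
                                 NeuralNetwork.Runtime.CPU)  // CPU as the final fallback
                .setModel(dlcFile)
                .build();
    }
}
```

The ordered runtime list is the key idea: the same trained model runs unchanged, and the SDK picks the most capable core that is actually available on the handset.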
The video below demonstrates the speed boost the DSP delivers to TensorFlow.
Here’s the breakdown:
Google wrote an app with TensorFlow and loaded it on two identical smartphones. The app identifies real-world objects placed in front of the smartphone camera. The only difference between the two devices is that one runs the app on the CPU and the other on the Hexagon DSP.
Compared with running on the CPU, the app running on the DSP recognizes more images within a set amount of time, analyzes and reaches a conclusion faster, and is more confident in its identification of the object.
You'll notice the following three metrics (a rough sketch of how they can be computed follows the list):
- FPS: the DSP captures and processes more images per second (frames per second), which increases the app's accuracy.
- MS: the DSP identifies the object faster (lower latency, in milliseconds).
- 0-1: the DSP has higher confidence in its conclusion about what the object is (0 means unknown, 1 means fully confident).
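As an illustration of how such numbers can be derived, the sketch below times a single classification call and reports latency, throughput, and a 0-to-1 confidence score. The classifyFrame supplier is a placeholder for whichever CPU-, GPU-, or DSP-backed inference call the app makes, and the softmax step is our assumption about how a raw class score becomes a 0-to-1 confidence; the demo app may already output normalized probabilities.

```java
import java.util.function.Supplier;

public final class InferenceMetrics {

    /** Softmax over the raw class scores; the largest entry is the 0-to-1 confidence. */
    static double topConfidence(float[] scores) {
        double max = Double.NEGATIVE_INFINITY;
        for (float s : scores) max = Math.max(max, s);
        double sum = 0.0;
        for (float s : scores) sum += Math.exp(s - max);
        double best = 0.0;
        for (float s : scores) best = Math.max(best, Math.exp(s - max) / sum);
        return best;
    }

    /** Times one classification call and prints latency (MS), throughput (FPS), and confidence (0-1). */
    static void report(Supplier<float[]> classifyFrame) {
        long start = System.nanoTime();
        float[] scores = classifyFrame.get();          // one forward pass on CPU, GPU, or DSP
        double ms = (System.nanoTime() - start) / 1e6; // MS metric: time per frame
        double fps = 1000.0 / ms;                      // FPS metric: frames processed per second
        System.out.printf("%.1f ms  |  %.1f FPS  |  confidence %.2f%n",
                ms, fps, topConfidence(scores));
    }

    public static void main(String[] args) {
        // Stand-in classifier returning made-up scores, just to exercise the metrics.
        report(() -> new float[] {0.1f, 2.3f, 0.7f});
    }
}
```

Lower per-frame latency directly yields a higher frame rate, which is why the DSP-backed phone in the video posts better numbers on all three counts at once.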