Current state of offline support on raspberry pi3/deepspeech

I am working on a personal project where I need to be able to use Mycroft offline, in a portable manner. My understanding is that there’s no offline support for the Raspberry Pi 3 B+ as it does not have the power to run DeepSpeech offline. Is this still the case?

Yes, this is still the case.

The compute requirements for DeepSpeech are still too high to be met by a Raspberry Pi, even the new Pi 3B+. However, there is some progress happening on DeepSpeech for ARM-architecture hardware (such as the RPi).
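For reference, running a 0.x-era DeepSpeech release offline looks roughly like this (the release version and model filenames below are the defaults shipped with that release and may differ between versions; a more powerful board than a Pi 3B+ is assumed):

```shell
# Install the DeepSpeech Python package (CPU build)
pip install deepspeech

# Fetch and unpack a pre-trained English model
# (URL/version is illustrative; use the release matching your package)
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.4.1/deepspeech-0.4.1-models.tar.gz
tar xvf deepspeech-0.4.1-models.tar.gz

# Transcribe a 16 kHz mono WAV file, fully offline
deepspeech --model models/output_graph.pbmm \
           --alphabet models/alphabet.txt \
           --lm models/lm.binary \
           --trie models/trie \
           --audio audio.wav
```

The acoustic model plus language model are on the order of gigabytes on disk, which is part of why the Pi's memory and CPU budget falls short.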


It looks like only the Jetson TX2 can run CUDA on an embedded platform. Is anyone working with this?

ARM boards with NPUs are around the corner, so …

… so you mean something like the Rockchip RK3399Pro?

Yeah, in this case I mean the Rockchip.

But more are coming soon. That’s how it always goes with these new types of ARM-based dev boards.

Beyond that, NPUs will be produced with other interfaces so they can be added to existing hardware, such as USB 3.0: either connected to the board’s SoC, or perhaps as a separate “dongle” (like the Movidius).

What I’m trying to say is that by the time DeepSpeech is more mature and personal/home backends arrive, deep learning inference will already be possible on low-power embedded devices.

I won’t be surprised if the Raspberry Pi 4, or whatever comes next, has an NPU…


A bit more information is available on that RK3399Pro SoC:

I must say, I’m impressed with the moving-object detection…


I just installed Mycroft on an Nvidia Jetson TX2 device; it looks good through the end of the setup script. I’ll report back with any further testing.
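For anyone wanting to try the same, the standard Linux install goes roughly like this (this follows the documented git-based setup; package names and prompts may vary by distro and by the version of the setup script):

```shell
# Clone Mycroft and run its interactive dev setup script,
# which installs dependencies and builds the Python environment
git clone https://github.com/MycroftAI/mycroft-core.git
cd mycroft-core
bash dev_setup.sh

# Start all Mycroft services, then open the CLI client
./start-mycroft.sh all
./start-mycroft.sh cli
```

On the TX2 this runs on the board’s ARM cores; whether the CUDA side gets used depends on how the speech backend is built, not on this install script.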