I have backed the new Mycroft and would ideally like to use it completely offline.
I have a central home automation server that fetches all the important data from sensors and APIs, and it should be controlled by Mycroft.
But I want to avoid my speech being sent to any server outside my controlled home subnet.
So the question is: what is the easiest way to make Mycroft work offline (either without an extra server, or with a self-hosted one)?
Hi there @skeltob, this is one of our most requested features - completely offline use. Unfortunately we don't have good documentation on how to do this at the moment. I do know that some of our Community members have done this in the past, including @Jarbas_Ai, who may have some guidance.
I've been waiting for a year on this and it still has not happened, which is a complete shame considering the recent post about Google/Alexa and Siri: Alexa_Siri_Google Hidden command attacks
It's definitely something we want to do, @gregory.opera; however, it requires a bit of work on our side. Our existing home.mycroft.ai platform is scaled to support tens of thousands of users and runs across several virtual hosts - probably not all that usable as a local / personal backend. So we need to work on scaling that down.
The other layers to this problem are:
Speech to text - really, this is the biggest blocker at the moment. Until we can get DeepSpeech to a point where it (or at least a vocabulary subset) can run on an embedded device, we're going to be stuck with cloud-based STT, irrespective of which cloud it runs on. There have been some substantive efforts by the DeepSpeech community toward this objective.
Skill support - most Skills need some form of internet connectivity, as they're connecting to third-party APIs.
Configuration settings - at the moment, configuration of Devices is done via Skill Settings at home.mycroft.ai, so we would need to find a way to do configuration locally (see the configuration sketch after this list).
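To make the STT piece concrete: below is a minimal sketch of how a self-hosted STT server could be wired into mycroft.conf, assuming Mycroft's deepspeech_server STT module and a deepspeech-server instance running on the local network. The host, port, and path here are placeholders, not official defaults.

```json
{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://192.168.0.10:8080/stt"
    }
  }
}
```

With something like this in place, transcription requests stay on the local subnet instead of going to a cloud STT provider.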
> I've been waiting for a year on this and it still has not happened… Alexa_Siri_Google Hidden command attacks
If I understand it correctly, that attack could still work on a completely offline solution.
I have been running DeepSpeech locally with the "pretrained" model on a separate computer in my house recently.
It was fairly easy to set up and to point Mycroft at it. The server does not have a GPU, so it's not as fast as it could be, but I think the gain in local network speed makes it not that different from the cloud service, which is kind of slow too, in my opinion. I will probably get a GPU-based server at some point, but I don't expect a huge improvement in speed, because non-GPU is already usable for the short commands I use.
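For anyone wanting to try the same setup: here is a minimal sketch of how you might sanity-check a local deepspeech-server instance before pointing Mycroft at it. It assumes the deepspeech-server package, which accepts raw WAV bytes as the body of a POST to /stt; the URL and filename are placeholders for your own.

```python
import requests

# Placeholder address of the LAN deepspeech-server instance;
# adjust host/port to match your deepspeech-server config.
STT_URL = "http://192.168.0.10:8080/stt"

# deepspeech-server expects the raw WAV bytes as the POST body
# (16 kHz, 16-bit mono works best with the pretrained model).
with open("test.wav", "rb") as f:
    audio = f.read()

response = requests.post(STT_URL, data=audio)
response.raise_for_status()

# The response body is the plain-text transcription.
print(response.text)
```

If the transcription comes back reasonably, the same endpoint can go into mycroft.conf as shown earlier in the thread.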
The big hit I'm taking is with accuracy. I have to speak slowly, right in front of the Mycroft, and leave gaps of silence between words.
I'm currently starting to research ways to better train the local service. I have not gotten very far yet.
My pipe dream would be for the Mycroft community to be able to share and assimilate incremental training gains without sharing any audio. That's way over my head at this point, though.
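For anyone else researching the training side: below is a rough sketch of what fine-tuning the released model with the upstream DeepSpeech training script generally looksks like. The CSV paths and checkpoint directory are placeholders, and flag names vary between DeepSpeech releases, so check the documented flags for your version before running anything.

```bash
# Placeholder paths. The train/dev/test CSVs use the DeepSpeech
# import format: wav_filename, wav_filesize, transcript.
python3 DeepSpeech.py \
  --checkpoint_dir ./pretrained-checkpoints \
  --train_files ./data/train.csv \
  --dev_files ./data/dev.csv \
  --test_files ./data/test.csv \
  --export_dir ./fine-tuned-model
```

Starting from the released checkpoints rather than training from scratch is what makes adapting the model to one voice or accent feasible on modest hardware.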
If you manage to release something on Windows any time soon, I'm fairly certain that it'd be really easy to do something with C# or VB.NET that uses the System.Speech.Recognition namespace in the Common Language Runtime. Its accuracy isn't the best, but it is definitely a functional baseline. And it doesn't seem to need pauses between each word like DeepSpeech apparently does.
I think a well-trained DeepSpeech doesn't require pauses between words. It just seems to be the combination of my voice and the "pretrained" model that Mozilla distributes that results in that.
You can also contribute to Project Common Voice. This is the data that DeepSpeech uses in the end, so it will at least get used to your accent and tone of voice.
Yeah, an offline version would be great. My thought is: even if it's a "server" program that runs on a GPU server in my house, I could link my embedded devices to that server. Skills that need to reach out to the internet still would, but most of the work would happen within my network, which would be great!
Running it on the device itself would be cool, but at least for me, a central server within my house that those devices connect to instead of the cloud would be awesome!
Just curious: https://home.mycroft.ai/#/deepspeech is just English, isn't it? Because I've found some sentences in Spanish and in German… and they were recognized not only correctly, but written in perfect Spanish (e.g. "cómo estás", even with the proper accents!).
Buenos días @malevolent! Great question. DeepSpeech is starting to provide transcriptions for both German and Spanish. If you're confident that the transcription is correct, then you're welcome to tag it.
It would be interesting to tag or filter those new languages somehow, so that people who can understand them can flag them as correct and false positives aren't created… I mean, English speakers who don't speak any other language will most probably flag any other language as "No", because they will think it is not English. Even I, who speak Spanish, doubted whether to mark it or not, because I didn't see any language filter on the site. It would be a shame to have the few sentences in those new languages marked as invalid when they are really good, don't you think?