downloaded and installed deepspeech and deepspeech-server
cloned the Mycroft git repo, checked out the tag for version 18.2.2, ran the dev-setup script
ran the start script and got Mycroft to hear me (after configuring my USB sound card correctly) and talk to me
located (created by copying from the default) the config file in my home folder
read through the config and felt confused:
there seem to be multiple online services that Mycroft contacts - what are they for? How do I configure them to use my local DeepSpeech (for speech-to-text) and PocketSphinx (for wake words)? What is the difference between a wake word and a hotword?
although I added the pairing skill to the blacklisted skills, Mycroft still loads it and doesn't stop talking about pairing and giving me pairing codes. How can I turn that off?
I'd like to just get it to tell me the time; from there, I'll find my way, I think. It's going to be dead slow, as I don't have a fancy graphics card for DeepSpeech to use, but I'd like to try it out nonetheless.
Can someone help me with the config or point me to a resource that explains how to do it, please?
(additionally: Is there a preview for forum posts available? Not sure about nested lists…)
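For reference, skill blacklisting in this era of mycroft-core is done in `mycroft.conf`. A minimal sketch of the relevant section follows; the exact skill folder name (`skill-pairing` here) is an assumption and may differ in 18.2.2, which could be why the blacklist entry is not taking effect:

```json
{
  "skills": {
    "blacklisted_skills": ["skill-pairing"]
  }
}
```

The name in the list has to match the skill's directory name exactly, so checking the actual folder name under the skills directory is worthwhile before debugging further.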
A Wake Word and a Hot Word are the same thing: a phrase that the Wake Word listener (Precise, which Mycroft now uses by default) uses to flag that the next Utterance should be treated as an Intent.
Mycroft is designed to pair with home.mycroft.ai; if you want to remove this dependency, you will essentially need to decouple Mycroft from home.mycroft.ai. We don't have any documentation on this, but we know a couple of people have done it before.
Mycroft contacts several online services, depending on the STT configuration. If the STT is cloud-based, that is one of them. Calls to home.mycroft.ai are another. If a Fallback Intent is triggered, such as Wolfram or Wikipedia, that adds another.
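To make the STT part local, the `stt` section of `mycroft.conf` can point at a locally running deepspeech-server instance. A rough sketch, assuming deepspeech-server is listening on port 8080 (module name and URI path are based on the deepspeech-server setup and may need adjusting for your version):

```json
{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://localhost:8080/stt"
    }
  }
}
```

With this in place, utterances are sent to the local server instead of a cloud STT provider.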
@KathyReid Thank you very much, Kathy, for your reply.
I'd prefer using the forums, for others to read and maybe be able to follow along. Chat is a bit too volatile for this.
My non-paired Mycroft seems to use PocketSphinx by default for wake word recognition. So probably the default you're referring to is only set after pairing (or it changed less than a week ago). Precise seems to use a custom wake word recognition model that is created for each user individually "in the cloud", right?
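For the PocketSphinx case, the wake word is configured per hotword in `mycroft.conf`, roughly like this (the phoneme string and threshold are illustrative assumptions; they need tuning for your microphone and environment):

```json
{
  "listener": {
    "wake_word": "hey mycroft"
  },
  "hotwords": {
    "hey mycroft": {
      "module": "pocketsphinx",
      "phonemes": "HH EY . M AY K R AO F T",
      "threshold": 1e-90
    }
  }
}
```

A lower (more negative exponent) threshold makes the keyword spotter stricter; raising it makes activation easier but increases false triggers.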
After browsing the code a bit, it looks like I'd need to trick Mycroft into believing it is_paired (or not asking if it is), and then make it load the skill settings from a local file (conveniently, there's functionality for that) and ignore any online configurations that don't exist anyway.
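As a rough illustration of that "trick" idea, here is a minimal Python sketch, not tested against real mycroft-core; it assumes the pairing check is a function like `is_paired()` (which my code browsing suggests lives in the `mycroft.api` module), and simply stubs it out:

```python
# Hypothetical sketch: make mycroft-core believe the device is
# already paired, so the pairing skill never activates.
# In mycroft-core, skills consult an is_paired() helper that asks
# home.mycroft.ai; replacing it with a constant True short-circuits
# that check entirely.

def is_paired():
    """Always report the device as paired (offline stub)."""
    return True

# Example: code guarding on pairing status would now proceed.
if is_paired():
    print("Pairing check skipped; loading skills normally.")
```

Actually wiring this in would mean patching or shadowing the real helper inside mycroft-core, which is more invasive than a config change.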
(not that I'd be able to do this easily… my Python is more beginner-level, so I may also wait for someone with more skills to step up).
That leaves setting the speech recognition server, where I'm still clueless. I might figure that out later.
Is it within the scope of the project to allow using it without the online home server in the future (you're running a business, after all)? Or would that mean forking, and maintaining the fork, to be able to continue using the skills? Sorry if that's too direct…
It takes quite a bit of setup and skill to do this.
The long version:
I would like to build or add to a speech coding skill; however, I live in the country and my satellite-based internet has a horrible lag time.
Mozilla DeepSpeech needs a GPU for rapid STT response times. I have an Nvidia GPU, and I am now collecting the parts to build a computer capable of running it fast enough.
A simple bash search for "pair":
~/Desktop/mycroft-core (dev)
$ grep -rnw './' -e 'pair'
./mycroft/client/speech/listener.py:177: return 'pair my device'  # phrase to start the pairing process
./mycroft/skills/main.py:175: 'utterances': ['pair my device'],
./mycroft/tts/mimic_tts.py:151: for pair in pairs:
./mycroft/tts/mimic_tts.py:152: pho_dur = pair.split(":")  # phoneme:duration
./mycroft/tts/__init__.py:128: pairs(list): Visime and timing pair
./README.md:62:By default, mycroft-core is configured to use Home. By saying "Hey Mycroft, pair my device" (or any other verbal request) you will be informed that your device needs to be paired. Mycroft will speak a 6-digit code which you can enter into the pairing page within the Mycroft Home site.
./scripts/mycroft-use.sh:222: echo "NOTE: This seems to be your first time switching to unstable. You will need to go to home-test.mycroft.ai to pair on unstable."
Run that query, then investigate the results. There are 4 configuration files and some code to jump over to avoid pairing or the attempt. (Someone with more knowledge, please correct me if that is incorrect.)
I have not yet found a simple setting that does what you want. I will update you when I find more information.