How would I go about using Mycroft without internet access/use of cloud services?

What I have so far:

  • downloaded and installed deepspeech and deepspeech-server
  • cloned the Mycroft git repo, checked out the tag for version 18.2.2, ran the dev-setup script
  • ran the start script and got Mycroft to hear me (after configuring my USB sound card correctly) and to talk to me
  • created the config file in my home folder by copying the default one
  • read through the config and feel confused:
    • Mycroft seems to contact multiple online services - what are they for? How do I need to configure them to use my local DeepSpeech (for speech-to-text) and PocketSphinx (for wake words)? What is the difference between a wake word and a hotword?
    • although I added the pairing skill to the blacklisted skills (roughly as in the snippet after this list), Mycroft still loads it and doesn't stop talking about pairing and giving me pairing codes. How can I turn that off?
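
For reference, this is what I added to my mycroft.conf - assuming the blacklist entry has to match the skill's directory name under /opt/mycroft/skills, and that the pairing skill's directory is called skill-pairing (both are guesses on my part):

{
  "skills": {
    "blacklisted_skills": ["skill-pairing"]
  }
}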

I'd like to just get it to be able to tell me the time; from there, I'll find my way, I think. It's going to be dead slow, as I don't have a fancy graphics card for DeepSpeech to use, but I'd like to try it out nonetheless.

Can someone help me with the config or point me to a resource that explains how to do it, please?

(additionally: Is there a preview for forum posts available? Not sure about nested lists...)


So, a couple of pieces here:

  • A Wake Word and a Hot Word are the same thing - a phrase that the Wake Word listener (Precise, which Mycroft now uses by default) listens for to flag that the next Utterance should be parsed as an Intent. Both are configured in the same place - see the sketch after this list.

  • Mycroft is designed to pair with home.mycroft.ai - if you want to remove this dependency, you will essentially need to decouple Mycroft from home.mycroft.ai. We don't have any documentation on this, but we know a couple of people have done it before.

  • Mycroft contacts several online services, depending on configuration. If the STT engine is cloud based, that would be one of them. Calls to home.mycroft.ai would be another. If a Fallback Skill like Wolfram or Wikipedia is triggered, that would be another.
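
To illustrate the Wake Word point: both live under the hotwords section of mycroft.conf, which is also where you would swap the listener module. A rough sketch - the phoneme string and threshold are illustrative values for PocketSphinx, not something I have verified against your install:

{
  "listener": {
    "wake_word": "hey mycroft"
  },
  "hotwords": {
    "hey mycroft": {
      "module": "pocketsphinx",
      "phonemes": "HH EY . M AY K R AO F T",
      "threshold": 1e-90
    }
  }
}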

Might be quicker for us to assist via Chat - https://chat.mycroft.ai

Best, Kathy

@KathyReid Thank you very much, Kathy, for your reply.

I'd prefer using the forums, so that others can read this and maybe follow along. Chat is a bit too volatile for that.

My non-paired Mycroft seems to use PocketSphinx by default for wake word recognition. So the default you're referring to is probably only set after pairing (or it changed less than a week ago). Precise seems to use a custom wake word recognition model that is created for each user individually 'in the cloud', right?

After browsing the code a bit, it looks like I'd need to trick Mycroft into believing it is_paired (or into not asking whether it is), and then make it load the skill settings from a local file (conveniently, there's functionality for that) and ignore any online configurations that don't exist anyway - roughly as sketched below.
(not that I'd be able to do this easily... my Python is beginner-level - so I may also wait for someone with more skills to step up).
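
To make the first part concrete, this is the kind of patch I have in mind - completely untested, and the location of is_paired may differ in other versions:

# Untested sketch: in mycroft/api/__init__.py, replace the body of
# is_paired() so it never consults home.mycroft.ai.
def is_paired():
    """Pretend the device is already paired.

    The original implementation asks the backend whether it knows this
    device's UUID; returning True unconditionally skips that lookup and
    should keep the pairing skill from being triggered.
    """
    return True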

That leaves setting the speech recognition server, where I'm still mostly clueless; my best guess so far is below. Might figure out the details later.
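
If mycroft-core already ships an STT module for deepspeech-server (browsing the code suggests it might), I would expect the config to look something like this - the module name and the URI are my assumptions, with deepspeech-server listening locally on port 8080:

{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://localhost:8080/stt"
    }
  }
}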

Is it within the scope of the project to allow using it without the online home server in the future (you're running a business, after all)? Or would that mean forking, and maintaining the fork, to be able to continue using the skills? Sorry if that's too direct...

Irene,

The short version:

It takes quite a bit of setup and skill to do this.

The long version:

I would like to build / add to a speech coding skill; however, I live in the country and my satellite-based internet has a horrible lag time. :(

Mozilla DeepSpeech needs a GPU for rapid response times in STT. I have an Nvidia GPU, and I am now collecting the parts to build a computer fast enough to support it.

A simple bash search for pair:
~/Desktop/mycroft-core (dev)
$ grep -rnw './' -e 'pair'
./mycroft/client/speech/listener.py:177: return "pair my device" # phrase to start the pairing process
./mycroft/skills/main.py:175: 'utterances': ["pair my device"],
./mycroft/tts/mimic_tts.py:151: for pair in pairs:
./mycroft/tts/mimic_tts.py:152: pho_dur = pair.split(":") # phoneme:duration
./mycroft/tts/__init__.py:128: pairs(list): Visime and timing pair
./README.md:62:By default, mycroft-core is configured to use Home. By saying "Hey Mycroft, pair my device" (or any other verbal request) you will be informed that your device needs to be paired. Mycroft will speak a 6-digit code which you can enter into the pairing page within the Mycroft Home site.
./scripts/mycroft-use.sh:222: echo "NOTE: This seems to be your first time switching to unstable. You will need to go to home-test.mycroft.ai to pair on unstable."

Run that query, then investigate the results. There are 4 configuration files and some code to jump over to avoid pairing, or the attempt at it (someone with more knowledge, please correct me if that is incorrect).
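
A narrower follow-up search may also help - is_paired is the check mentioned earlier in the thread, so finding its call sites shows the code paths to jump over (assuming the function is named the same in your checkout):

$ grep -rn "is_paired" ./mycroft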

I have not yet found a simple setting that does what you want. I will update you when I find more information.

C
