So, as you can tell from the title, I have very little experience in writing code. I don’t know Python or Java. But I am a fast learner/tinkerer and am committed to seeing this through. I am looking for any suggestions that might help me or anyone willing to put in a little time to answer a few questions when I run into problems. I figure this is the best place to start.
I have a Raspberry Pi 3 B running Raspbian Jessie (not Lite) with MagicMirror and Mycroft both installed and operating without issues. Both start automatically on boot and operate independently. For the skill I’m proposing to work, a user would need a few things set up on their Pi.
First, I started with a Raspberry Pi 3 B and a 16 GB microSD card.
Then get the Raspbian Jessie image here:
http://downloads.raspberrypi.org/raspbian/images/raspbian-2017-07-05/
After using Etcher to put the image on the SD card, put the card in the Pi and boot it. In the Linux terminal window, run:

sudo apt-get update
sudo apt-get upgrade
Then, from the command line or a Linux terminal window, follow the installation instructions for MagicMirror:
https://github.com/MichMich/MagicMirror
Then, once you have followed all of the instructions and have MagicMirror working, install mycroft-core. Files and directions are here: https://github.com/MycroftAI/mycroft-core/tree/master
I did run into an issue with the libfann-dev library, explained (with a solution) in this thread: Trying to install both Mycroft-Core and MagicMirror on the same PI
So… I thought I would try to build on an existing MagicMirror module called MMM-voice, with the Hello-Lucy modifications, and have it receive commands from a Mycroft skill.
That’s where I am now. The MMM-voice module for MagicMirror is installed and works if I stop Mycroft. The problem is having only one microphone: with both running I get "audio open error: Device or resource busy". Essentially, two applications can’t share the microphone without some workaround (possibly using ALSA’s dsnoop). But my thinking is to just have Mycroft use the microphone and build a skill that passes commands to the MMM-voice module (which is JavaScript).
I believe building the Mycroft skill to do that will be rather straightforward, using commands like the ones in the Hello-Lucy modifications, e.g.:
Hide Clock
Hide Email
Show Clock
Show Email
Swipe Left
Swipe Right
Show Page One
Hide Page One
Show Modules
Hide Modules
etc. etc.
The .voc files would probably be:
ActionKeyword.voc
containing:
Show
Display
Hide
Conceal
Modules.voc
containing:
Alarm
Clock
Email
News
Weather
etc. (putting all of the module names in the Modules.voc file)
sample1.intent.json
contains:
{
    "utterance": "Hide Clock",
    "intent_type": "ActionKeywordIntent",
    "intent": {
        "ActionKeyword": "HIDE",
        "Module": "CLOCK"
    }
}
As far as dialog goes, Mycroft could respond with "Hiding Clock".
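From reading through the example skills in mycroft-core, I think (though I'm not at all sure) that a single Adapt intent requiring one word from each .voc file could cover all of those commands, instead of one intent per command. Something roughly like this; the skill name, the handler name, and the HIDE_CLOCK command format are just my guesses:

from adapt.intent import IntentBuilder
from mycroft.skills.core import MycroftSkill


class MagicMirrorVoiceSkill(MycroftSkill):
    # Hypothetical skill; vocab/en-us/ would hold the ActionKeyword.voc
    # and Modules.voc files listed above.

    def __init__(self):
        super(MagicMirrorVoiceSkill, self).__init__(name="MagicMirrorVoiceSkill")

    def initialize(self):
        # One intent that requires a word from ActionKeyword.voc and a word
        # from Modules.voc -- not a separate intent for every command.
        intent = IntentBuilder("MagicMirrorIntent") \
            .require("ActionKeyword") \
            .require("Modules") \
            .build()
        self.register_intent(intent, self.handle_magic_mirror)

    def handle_magic_mirror(self, message):
        action = message.data.get("ActionKeyword")  # e.g. "Hide"
        module = message.data.get("Modules")        # e.g. "Clock"
        command = "{}_{}".format(action, module).upper()  # "HIDE_CLOCK"
        # TODO: get `command` over to the MMM-voice module (question 2 below)
        self.speak("{} {}".format(action, module))  # a .dialog file could turn this into "Hiding Clock"


def create_skill():
    return MagicMirrorVoiceSkill()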
So that’s kind of where I am. Two really big questions:
- Will I need to build an intent for each command, or would there be a way to use an array?
- What would be the best way to have Mycroft send a command like "HIDE_CLOCK" to the MMM-voice module to be processed? A socket message? Or should I reuse the code written by fewieden and bring that into the Mycroft environment? (A very rough sketch of what I mean by the first option is below.)
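For the second question, one idea (purely a guess on my part, since nothing on the MagicMirror side listens for this yet) would be to add a small listener to the MMM-voice node_helper on a local port and have the skill push the command string to it over a plain TCP socket. The sending side could be as simple as:

import socket


def send_to_mirror(command, host="127.0.0.1", port=8085):
    # Port 8085 and the newline-terminated message are made up here;
    # whatever the node_helper ends up listening for would have to match.
    sock = socket.create_connection((host, port), timeout=2)
    try:
        sock.sendall((command + "\n").encode("utf-8"))
    finally:
        sock.close()

# e.g. send_to_mirror("HIDE_CLOCK")

Since Mycroft and MagicMirror both run on the same Pi, localhost should be fine for that.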
Whatever advice the community is willing to provide would be extremely helpful.
Cheers!