Using Mycroft without Mycroft's speech recognition

Is there a way to send commands to Mycroft via HTTP queries that contain the command in natural language plus an identifier of that language? Something like:

This way there is no need for Mycroft’s own speech recognition / audio recording capability, and one would just use its text-to-speech, dialog management and query interpretation/execution capabilities. One could even disconnect Mycroft’s microphone (for increased privacy).
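For anyone looking for a starting point: Mycroft exposes a websocket messagebus (by default on ws://localhost:8181/core), and injecting a `recognizer_loop:utterance` message makes the intent parser handle text exactly as if it had been transcribed from speech. A minimal sketch, assuming a local Mycroft instance and the third-party `websocket-client` package (the helper name `build_utterance_message` is mine, not part of Mycroft):

```python
import json

def build_utterance_message(text, lang="en-us"):
    """Serialize a messagebus frame that injects `text` as an utterance,
    as if it had come from the speech recognizer."""
    return json.dumps({
        "type": "recognizer_loop:utterance",
        "data": {"utterances": [text], "lang": lang},
    })

if __name__ == "__main__":
    # Assumes a running Mycroft and `pip install websocket-client`.
    from websocket import create_connection
    ws = create_connection("ws://localhost:8181/core")
    ws.send(build_utterance_message("what is the weather", "en-US"))
    ws.close()
```

An external HTTP endpoint (e.g. for the smartwatch scenario below) would just need to wrap this send in a small web handler.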

This would make the integration of certain custom solutions simpler. For example, tapping on a smartwatch triggers speech recognition on the watch, and a custom app on the watch converts the transcription into the HTTP query above. This improves usability because there is no need for a wake word, and speech recognition is more reliable because the position of the mouth relative to the microphone is always constant.

Some custom solutions would prefer the query to return a response as a JSON structure, e.g.

would deliver:

{ "response": "it's sunny", "lang": "en-US" }

Ideally the user should be able to define custom JSON templates which Mycroft would fill with the contents of the "response" and "lang" fields.
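Such template filling is easy to prototype outside Mycroft. A hypothetical sketch using stdlib `string.Template`: the `$response` and `$lang` placeholder names are my assumption, nothing in Mycroft defines them.

```python
import json
from string import Template

def fill_template(template, response, lang):
    """Substitute the reply text and language code into a user-supplied
    JSON template containing $response and $lang placeholders."""
    return Template(template).substitute(response=response, lang=lang)

# A user-defined template with an extra custom field:
custom = '{"answer": "$response", "locale": "$lang", "source": "mycroft"}'
filled = fill_template(custom, "it's sunny", "en-US")
```

Note that `Template` does no JSON escaping, so a reply containing a double quote would break the output; a production version should build a dict and call `json.dumps` instead.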

This way both the STT and TTS are decoupled from the system.

It seems like it would be trivial to implement, so maybe something like this exists already?


On picroft there’s say_to_mycroft which does what you’re looking for. It’s basically a wrapper for (

Thanks! I’ve made a hackish wrapper around the script, that more or less does what I need:

One desired extension would be to get the response from Mycroft as plain text (the string that ends up in the log under "SpeechClient - INFO - Speak"), so it can be played with a non-Mycroft TTS.
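That string is also announced on the messagebus: Mycroft publishes a `speak` message whose text lives in `data["utterance"]`. Filtering the bus for it lets an external client feed the reply to its own TTS engine. A sketch under the same assumptions as before (local Mycroft, `websocket-client` installed; the helper `extract_speak_text` is mine):

```python
import json

def extract_speak_text(raw_frame):
    """Return the spoken text from a raw messagebus frame, or None
    if the frame is not a `speak` message."""
    msg = json.loads(raw_frame)
    if msg.get("type") == "speak":
        return msg.get("data", {}).get("utterance")
    return None

if __name__ == "__main__":
    # Assumes a running Mycroft and `pip install websocket-client`.
    from websocket import create_connection
    ws = create_connection("ws://localhost:8181/core")
    while True:
        text = extract_speak_text(ws.recv())
        if text:
            print(text)  # hand this string to a non-Mycroft TTS
            break
    ws.close()
```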

Btw, a bug report: when I ask: “How old is Roger Federer”, then the response (in version v0.8.19) is:
“Sorry, I don’t know how is Rodger Federer old”.

File an issue on GitHub for bugs. Include copious documentation and logs if possible.