How can I match two skills from one utterance/sentence?

I’m a little confused about how to combine two skills together. For example, I want one skill to succeed, but I also want Mycroft to search the utterance again to see if it matches another skill, and answer with that second skill as well.

I was looking at this skill, but to be honest the code is a little confusing, and I couldn’t find much information regarding the message bus (perhaps I’m not looking in the right place). Would anyone have an idea for this kind of problem? Thanks

Hi there,

I’m wondering if you have any examples to show the behaviour you are trying to achieve? What would the user say, and then what are you wanting the two Mycroft skills to respond with?

There are a few ways to achieve this, some of which were outlined in the other thread, but it depends on what you are really aiming to do.

Basically, I just want to check whether one intent/sentence matches another skill, from inside my own skill.

Yes, this makes sense, but unfortunately it is not yet possible, particularly in the broad sense of “any valid utterance combined with any other valid utterance”.

One challenge is extracting multiple intents and handling the many possibilities that come with that. It seems simple at first: split the utterance on “and”, then handle both halves as independent queries. However, consider these examples:

  1. “Start a timer for 5 minutes and another for 20 minutes”
    In this case we need the context from one intent to properly parse the second half.

  2. “Play Piano and I by Alicia Keys”
    This time there is only one intent, which shows that we need to check the whole utterance as well as its components.

These are just two quick examples off the top of my head but we need to look at what all of those edge cases might be. We’re also working to speed up Mycroft’s response times and checking multiple possible utterances simultaneously adds greater complexity to that.
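To see why naive splitting falls short, here is a minimal, hypothetical sketch (plain Python, not Mycroft code — the function name is made up for illustration):

```python
def naive_split(utterance):
    """Naively split an utterance into candidate sub-utterances on ' and '.

    A toy illustration of why simple splitting is not enough for
    multi-intent parsing.
    """
    return [part.strip() for part in utterance.split(" and ")]

# Example 1: "another for 20 minutes" loses the context that a *timer* is meant
halves = naive_split("Start a timer for 5 minutes and another for 20 minutes")

# Example 2: a single intent is wrongly cut in two,
# producing "Play Piano" and "I by Alicia Keys"
wrong = naive_split("Play Piano and I by Alicia Keys")
```

Both results are syntactically fine splits, but neither half of example 2 is a valid request on its own, which is why the whole utterance must also be checked.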

How would you rate your programming experience? Is it something you are keen to explore, or are you looking for an existing solution? There might be others reading along who are interested in helping out.

If you’re not a programmer, thinking through what some of these challenging phrases might be is also extremely helpful for those who do have the coding skills.


As mentioned in another thread, would it be possible to send an utterance to the messagebus and then match it against all the intents?

In that example we are essentially just issuing a new utterance to the messagebus, and Mycroft would treat that like a user had spoken it. This then goes through the normal intent handling process and will get actioned by whichever Skill has the highest match.
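As a hedged sketch of what that injected message looks like (only the message type and data shape come from the snippets in this thread; the function name is illustrative), the payload can be built with the standard json module:

```python
import json

def make_utterance_payload(text):
    # Shape of a "recognizer_loop:utterance" message: Mycroft's intent
    # service treats the utterances list as if the user had spoken it.
    return json.dumps({
        "type": "recognizer_loop:utterance",
        "context": "",
        "data": {"utterances": [text]},
    })
```

Actually sending it requires a websocket connection to the messagebus (for example via the websocket-client package), as shown later in this thread.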

Unfortunately, the functionality that I think you are looking for would probably require a significant restructure of at least the intent handling process.

As an example, the Timer Skill doesn’t know what to do with “Play Alicia Keys on Spotify and start a timer for 5 minutes”, so we would need to do some pre-processing of the utterance to realise that there are actually two intents there. We also don’t want 30 Skills to all fire up and try to respond to every utterance, as that would consume significant system resources and be extremely slow.

This is functionality that we are looking at for the future, however not a priority for the team at the moment.

I’ve integrated mycroft into my robot and I want to be able to perform some specific actions like “Go to the twins room and wake them up”. I have a skill created for “go to the twins room” and the robot navigates there. If my skill (or something else) parses out the “wake them up” part, is it possible to inject that already recognized speech into the messagebus (like the CLI does) so that it then gets processed by a different skill?

Simple answer is yes. The “wake them up” part will show up via message.utterance_remainder() in your skill, and you could then pass it to another skill over the messagebus. Something like…

import pickle
from websocket import create_connection  # from the websocket-client package

URL_TEMPLATE = "{scheme}://{host}:{port}{path}"

# socket_host and socket_port are defined elsewhere in the skill
def send_message(message, host=socket_host, port=socket_port, path="/core", scheme="ws"):
    payload = pickle.dumps({
        "type": "recognizer_loop:utterance",
        "context": "",
        "data": {
            "utterances": [message]
        }
    })
    url = URL_TEMPLATE.format(scheme=scheme, host=host, port=str(port), path=path)
    ws = create_connection(url)
    ws.send(payload)
    ws.close()

where the “message” argument passed to this routine is the remainder of your request.
This is untested, but I have done something similar in other skills.


Thanks a bunch, that will certainly save me time! When the skill receives a “callback” from ROS, I’ll call that and inject the remainder. Seems simple enough. I’ll obviously need to write a new skill to do the waking-up part… thinking a relay-activated foghorn.


Note: Update to Update down at bottom…

I’m finally getting around to this, and I’m finding that utterance_remainder() is returning the entire utterance. Maybe the routines I’m using are not compatible with utterance_remainder() (still learning Mycroft)?

This is my handler:

def handle_turn(self, message):
    position = message.data.get('position')
    timer = message.data.get('timer')
    if timer is not None:
        position = position + ":" + timer  # the ':' indicates a timed turn ("clockwise", "5")
    direction = message.data.get('direction')
    if direction is not None:
        position = position + "#" + direction  # the '#' indicates a turn to a position (degrees) in a particular direction ("30", "clockwise")
    remainder = message.utterance_remainder()
    if remainder is not None:
        position = position + "|" + remainder
    ros_request = position

and this is turn.intent:

turn {position}
turn {position} for {timer} seconds
turn {position} degrees {direction}
rotate {position} for {timer} seconds
rotate {position} degrees {direction}

If I issue “turn around and say hello” what gets sent to the robot via the turnto_pub.publish() command is:

around and say hello | turn around and say hello

So it looks like {position} is getting filled with ‘around and say hello’. Not sure how to handle that.

Update to Update
So, on the hunch that Padatious intents (which I believe the above are) don’t work with utterance_remainder(), I created a simple Adapt intent (I think) as follows:

def handle_turn(self, message):
    angle = message.data.get('angle')
    remainder = message.utterance_remainder()

My turn.voc file was:


and my angle.voc file was:


When I issue a “turn around and say hello”, I get no response, but if I issue a “turn around then say hello”, it parses the “turn” and “around” and spits out “then say hello” as the remainder. So Adapt seems to be parsing “around and say hello” as the angle to match against the angle.voc file and fails to find a match, but when there’s a “then” it parses just the “around” as the angle and returns “then say hello” as the remainder.

The issue I see is that your first intent, turn {position}, is true for all the others and is therefore excluding them. Adapt also has “one_of” and “optional” options for keywords.

Thanks for the response. They all do work (edit: I went back to using the original Padatious intent). I can do a “turn 90 degrees clockwise” and it works, as does “turn around”. To overcome the issue with the remainder, I went ahead and parsed it out within the skill. Basically, I’m performing either a .split(" and ", 1) or a .split(" then ", 1) on what comes in with the last entity and treating everything after the split as the remainder. It’s working, and the robot goes into the twins’ room and ‘wakes them up’.
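The split-based workaround described above can be sketched like this (a toy illustration; split_remainder is a made-up helper name, not part of the Mycroft API):

```python
def split_remainder(entity_text):
    """Split over-captured entity text into (entity, remainder).

    Cut on the first ' and ' or ' then ' and treat everything
    after it as the remainder to hand off to another skill.
    """
    for sep in (" and ", " then "):
        if sep in entity_text:
            entity, remainder = entity_text.split(sep, 1)
            return entity, remainder
    return entity_text, None

# e.g. "around and say hello" -> ("around", "say hello")
```

The limit of 1 on split() matters: it keeps any later “and”/“then” inside the remainder, so “go to the kitchen and wait and listen” hands off “wait and listen” intact.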

As for injecting the remainder into the messagebus, this is what I ended up having to use. It seemed not to want the byte stream that pickle produced.

from websocket import create_connection  # from the websocket-client package

def send_message(self, message, host="", port=8181, path="/core", scheme="tcp"):
    payload = '{"type":"recognizer_loop:utterance","context":"","data":{"utterances":["' + message + '"]}}'
    url = 'ws://' + host + ':' + str(port) + path
    ws = create_connection(url)
    ws.send(payload)
    ws.close()