Detecting events from other skills


I am working with Picroft in a custom enclosure, with hardware components that I would like to have respond to the actions of various skills. I would like to be able to do this in a general context with a variety of existing skills, but for the purposes of this question, take the example of an LED that turns on when a timer, set with the standard mycroft-timer skill, expires, and turns off when the timer is canceled.

I have thought of a few solutions for this, the worst of which is modifying the code of the skill itself to include a line that switches the light. One step better would be modifying the skill’s code to send a message via the messagebus to an external script that handles hardware interactions.

The ideal solution, as I currently understand it, would be to catch messages already sent by the skill in question with an external script that handles hardware interactions. The problem is that, short of shooting in the dark until something works, I do not know how to figure out what messages (if any) are created by methods in those skills.

In short, my question is: how can I check what messages are being sent by a specific skill and when?
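For context on what there is to observe: the messagebus carries JSON frames over a websocket (ws://localhost:8181/core by default), each with "type", "data", and "context" fields, so an external script can attach any websocket client and filter the frames by message type. Below is a stdlib-only sketch of that filtering step; the sample frame and its "timer.expired" type are hypothetical, just showing the shape such a message would have:

```python
import json

def filter_bus_frame(raw_frame, type_prefix=""):
    """Decode one messagebus frame and return (type, data) if its
    message type starts with type_prefix, else None."""
    msg = json.loads(raw_frame)
    msg_type = msg.get("type", "")
    if msg_type.startswith(type_prefix):
        return msg_type, msg.get("data", {})
    return None

# A hypothetical frame, shaped the way the bus would carry a timer message:
frame = json.dumps({"type": "timer.expired",
                    "data": {"name": "pasta"},
                    "context": {}})
print(filter_bus_frame(frame, "timer."))
```

Hooked up to a live websocket connection, a callback like this would let the hardware script react only to the skill it cares about.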

Thank you for any help you can give!


Hey Kabi,

Skills don’t necessarily emit messages to perform their functionality, so there isn’t currently a simple way to achieve what you’re describing. However, I think it is a good suggestion.

With the Timer example we could emit a series of messages indicating the changing state of Timer notifications. Something like (message names here are illustrative):

    self.bus.emit(Message("skill.timer.new", {"timer": new_timer}))
    self.bus.emit(Message("skill.timer.expired", {"timer": expired_timer}))



Thanks for the reply!

It’s unfortunate that this isn’t a standard feature; it feels like a good standard to adopt.

In that case, what would you recommend as the most responsible way to change the code in an existing skill? Obviously, I could simply modify the code directly on my device, but that seems vulnerable to being overwritten unexpectedly. Should I create a fork of the skill’s associated GitHub repository? If so, how would I set my Mycroft instance to recognize that specific fork as the one from which it should update?

Also, in the interest of better understanding how skills use the message bus, is there a standard established for how skills can be invoked by messages, even though there is no standard for the opposite? I ask because when I was trying to interface with skills, I found that I was sometimes able to call specific methods from a skill with a sensibly named message (e.g. self.bus.emit(Message('mycroft-spotify.forslund:list_devices')) would call the method list_devices() from the skill mycroft-spotify.forslund). Notably, the function list_devices() took message as an argument in addition to self.
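For what it’s worth, the frame behind an emit like the one above is just JSON in the standard type/data/context shape, so an external script can produce it without importing anything from mycroft-core. A sketch (the skill-id:handler naming is copied from the example above and, as noted later in this thread, may change between versions):

```python
import json

def make_bus_frame(msg_type, data=None, context=None):
    """Serialize one messagebus frame in the type/data/context shape."""
    return json.dumps({"type": msg_type,
                       "data": data or {},
                       "context": context or {}})

# The same invocation as self.bus.emit(Message('mycroft-spotify.forslund:list_devices')),
# expressed as the raw frame a websocket client would send:
frame = make_bus_frame("mycroft-spotify.forslund:list_devices")
print(frame)
```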

Thank you for all your help!

Fork, clone, and set upstream (optionally, use Atom with git and remote-sync to your mycroft instance to make the coding process easy), push your updates to a branch of your fork, and monitor changes made in the original skill, pulling from upstream and merging if necessary.
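The steps above can be walked through on a scratch repo to see the branch-plus-upstream flow end to end. A sketch: the paths are throwaway /tmp directories, and a local "upstream" repo stands in for the original skill on GitHub so everything runs offline.

```shell
set -e
rm -rf /tmp/skill-flow && mkdir -p /tmp/skill-flow && cd /tmp/skill-flow

# A local stand-in for the original skill repo on GitHub.
git init -q upstream && cd upstream
git config user.email demo@example.com && git config user.name demo
echo "original skill code" > __init__.py
git add . && git commit -qm "upstream skill"
cd ..

# "Fork" and clone it, then do all work on a separate branch.
git clone -q upstream fork && cd fork
git config user.email demo@example.com && git config user.name demo
git checkout -qb led-changes
echo "my LED hook" >> __init__.py
git commit -qam "add LED hook"

# Track the original repo so its changes can be fetched and merged later.
git remote add upstream-origin ../upstream
git fetch -q upstream-origin
DEFAULT=$(git -C ../upstream rev-parse --abbrev-ref HEAD)
git merge -q "upstream-origin/$DEFAULT"
git log --oneline
```

On a real device the clone would live under /opt/mycroft/skills and the remotes would point at GitHub, but the commands are the same.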

Your changes will be recognised and not overwritten.

(And I definitely second the feature request.)


Hey SGee,

I’ve created a fork and made preliminary changes; the issue I am running into now is how to get my Mycroft instance to identify my fork as the repository to grab the skill from. Are you aware of how to do this?

Hey, if you change a Skill on your device, Mycroft will not update it. This means you won’t lose your work, but it also means you won’t get any automatic updates from the upstream Skill. Each Skill in /opt/mycroft/skills is a git repo, so you can manage them the same way you would any other git project.
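For the "use my fork" question specifically: since each installed skill is already a git repo, pointing it at a fork is just a remote change. A sketch on a scratch repo (the GitHub URLs are illustrative; on a device you would run this inside the skill's directory under /opt/mycroft/skills instead):

```shell
set -e
rm -rf /tmp/skill-remote && git init -q /tmp/skill-remote && cd /tmp/skill-remote

# Pretend this repo was installed from the upstream skill...
git remote add origin https://github.com/MycroftAI/mycroft-timer.git
# ...then point origin at your fork instead (illustrative URL).
git remote set-url origin https://github.com/example-user/mycroft-timer.git
git remote get-url origin
```

After that, `git pull` in the skill directory tracks the fork rather than the upstream repo.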

We welcome contributions to all our repositories, so if you add something that you think would be useful for others then please create a PR back to the upstream Skill.

The challenge on automating this for all Skills and intents is that we don’t know what the outcome of an intent is. Imagine a user says:

Set a timer for pasta

This will trigger the Timer Skill but it hasn’t yet created a valid timer as it doesn’t have a duration. The Skill will ask for this additional information and if provided will then set a timer. But if the user stops responding, cancels the request, or says something that cannot be parsed as a duration then no timer will be set.

One possibility is that we add a message for which intent is called and whether that intent was successful or not. You could then assume that if a “set_timer” intent reported success, then a timer is active.

The expiration isn’t triggered by an intent however so a message would probably need to be manually emitted by the Skill for that.

We do have a Skill API feature in the works that allows Skills to expose specific methods to other Skills. The Spotify list_devices method is a curious one, I’m not sure why that works. @forslund can you shed some light on that? I can’t see any event handlers setup for it.

Currently the workaround to call another Skill is to emit an utterance as if the user had said something that you know will trigger the other Skill.
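That utterance-injection workaround corresponds to emitting a "recognizer_loop:utterance" message on the bus. As a sketch, the frame an external script would send looks like this (the phrase is just an example, and "en-us" assumes an English-language setup):

```python
import json

# Frame that makes Mycroft behave as if the user had spoken the phrase.
utterance_frame = json.dumps({
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["set a timer for 5 minutes"],
             "lang": "en-us"},
    "context": {},
})
print(utterance_frame)
```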

This is why I suggested a slightly more intricate way, but one that ultimately grants a better workflow.

Another caveat (I guess): some skills are hooked into the backend to provide global configuration options. I think these are tied to a specific codebase, so unless that is rewritten, the skill will be useless.

@gez-mycroft that makes sense. Truly, I do not think programmatically generating responses to intents makes much sense for any party, which makes this more of a “best practices suggestion” than a feature request. Designing skills to manually emit messages containing relevant information when important things happen would make interfacing with skills extremely easy. This relies entirely on an honor system, which is not such a satisfying solution, but with enough caring contributors, anything is possible. Either way, it’s food for thought when designing. In my opinion, the best thing about Mycroft (and the thing that drew me to it to begin with) is the potential to control the web of cause and effect at every level, and designing to allow users to control things as they choose will give Mycroft exponentially more potential in that regard.

A skill API would be a godsend. Allowing skills to expose methods would very neatly handle one direction of communication, and it would formalize the notion that skills do not exist in a vacuum.


I don’t think I fully understood your original response until I read it again.

Were you suggesting using GitHub for version control, but forgoing a typical `git pull` for updating the skill on the Mycroft device in favor of Atom?

In essence. My setup almost completely centers around Atom. There is not much to do on GitHub itself, except to send PRs when changes are made and it’s time to potentially merge into the upstream skill (and even that isn’t necessary).

The fork gets cloned to my local machine, edited with Atom (GitHub integration activated, so you can forget about the command line), and then pushed onto mycroft (with the Atom package remote-sync) with every change you save. This makes it easy to test your changes (live) right away and act appropriately.

Always work on a separate branch, to keep the default branch clean; that branch then gets pushed to your GitHub fork. With that and the upstream setup (I linked), you can easily fetch changes of the default branch (locally), or of other branches, and incorporate them if you choose to do so, or simply push them to your fork.

When the PR is accepted, you can fetch the default branch and send it to mycroft, and everything works as normal. Alternatively, you can stick with your changes while keeping an eye on the changes made since. (Big) changes are not so frequent that they are hard to keep up with.

At the moment I’m tinkering with the Jarbas Node-Red skill / Hivemind (I had Node-RED running on my home server before, dealing with my sensor stuff). In theory it may accomplish what you’re trying to do (essentially it hooks the message bus to a websocket communicating with Node-RED), but the other way around :stuck_out_tongue: (light → skill)

All intent handlers are registered on the messagebus and their message type is more or less mangled / demangled when registered and received. Technically any intent handler can be called but the message type and the actual parameter names may change at any time.

The list_devices method doesn’t use any data so it’s pretty easy to invoke since the message data doesn’t need any special format.
