Servo for Visual Skill Building and Conversational Interaction

Hey people. We’re exploring a partnership with a new OSS project called Servo Labs, which is building a visual skill-writing interface for multiple NLP engines and cross-channel deployment.

Check out a demo video.
See the code on GitHub.

Anyone have any thoughts on this versus Node-RED, Blockly, etc.?

I’ve also got a Mattermost channel here.

Hopefully, the guys from Servo will be poking around here soon.

Will this be an application for conversational context or am I on the wrong track…?

By my understanding it will do both. Part of my excitement is making skill dev as a whole more accessible, but yes, a lot of Servo’s focus is on conversational context and context switching within the interaction.

Hi there, I am Sagi, a co-founder at Servo Labs. @Dominik, per your question, we do handle context. Servo is built, like humans, to handle multiple conversations at the same time and switch between them when a context change is detected. Each subconversation is represented by a subtree (the green nodes in the video). Each subtree holds a different intent and entities (but subtrees can share entities as well). Based on their combination, we point the assistant to the right conversation, and even deeper into a conversation when needed. Once you learn the basics, it becomes very intuitive.
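One rough way to picture the subtree idea (all names here are hypothetical illustrations, not Servo's actual API): each subconversation holds an intent plus its entities, and a recognized intent that maps to another subtree triggers a context switch.

```python
# Hypothetical sketch of context switching between subconversations,
# loosely modeled on the behavior-tree description above.
# None of these class or method names come from Servo.

class Subtree:
    """One subconversation: an intent plus the entities it expects."""
    def __init__(self, intent, entities):
        self.intent = intent
        self.entities = entities

class Assistant:
    def __init__(self, subtrees):
        self.subtrees = {t.intent: t for t in subtrees}
        self.active = None  # currently focused subconversation

    def handle(self, intent, entities):
        # An intent belonging to a different subtree triggers a
        # context switch; an unrecognized intent keeps the current one.
        target = self.subtrees.get(intent)
        if target is not None and target is not self.active:
            self.active = target
        return self.active.intent if self.active else None

assistant = Assistant([
    Subtree("order_pizza", {"size", "topping"}),
    Subtree("check_weather", {"city"}),
])
assistant.handle("order_pizza", {"size": "large"})
assistant.handle("check_weather", {"city": "Berlin"})  # context switch
```

The point of the sketch is only the routing: one assistant, several live subconversations, and the detected intent deciding which one has focus.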

One thing that I didn’t mention: you can connect any NLU model to it. At the moment we use LUIS and Rasa. Once you build a model, you can export the JSON tree and share it with other developers. Looking forward to seeing some cool skills.

How does Mycroft fit into this? Will you support Adapt/Padatious intents, create full skills, and pipe information around? Or would we need to somehow parse a JSON tree and build a skill out of it?

I made a Node-RED integration skill: I simply connected the messagebus and have Node-RED handle the whole logic externally, passing information back and forth. I wouldn't consider this proper Mycroft support, but it works.
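For anyone curious what that bridge looks like: Mycroft's messagebus is a websocket (by default `ws://localhost:8181/core`) carrying JSON messages with `type`, `data`, and `context` fields, and `speak` is a real message type an external tool can emit. A minimal sketch of the payload side (the commented send code assumes the third-party `websocket-client` package and a running Mycroft instance):

```python
import json

def bus_message(msg_type, data=None, context=None):
    """Serialize a Mycroft messagebus message (JSON over websocket)."""
    return json.dumps({
        "type": msg_type,
        "data": data or {},
        "context": context or {},
    })

# An external flow engine like Node-RED can emit "speak" messages
# to make the assistant talk.
payload = bus_message("speak", {"utterance": "Hello from Node-RED"})

# Actually sending it would look roughly like this (requires the
# websocket-client package and a running messagebus):
#
#   import websocket
#   ws = websocket.create_connection("ws://localhost:8181/core")
#   ws.send(payload)
#   ws.close()
```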

I personally don’t like flows that much for logic, but for complex dialogs and context I see plenty of potential!


You would use Servo Labs similarly to Node-RED. The main difference is the context layer provided by Servo, which runs on the behavior trees. Second, behavior trees are a proven paradigm for complex skills, as they are a leading tool in modern games; combining the two reduces the complexity dramatically. Servo can also handle a conversation in several languages on the same logic flow: when you switch languages, it redirects data to a different NLU model and prompts in the correct language within a node. Please check the website to learn more and see if it fits your specific needs :). For more questions we will be happy to help and support.
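The multi-language routing described above might look something like this in miniature (a sketch with invented endpoints and names, not Servo internals): the active language selects both the NLU model that receives the utterance and the prompt text a node speaks.

```python
# Hypothetical sketch: one logic flow, per-language NLU models and
# prompts. Endpoints and dictionary names are made up for illustration.

NLU_MODELS = {
    "en": "https://example.com/nlu/english",  # e.g. a LUIS app
    "de": "https://example.com/nlu/german",   # e.g. a Rasa server
}

PROMPTS = {
    "en": {"greet": "How can I help?"},
    "de": {"greet": "Wie kann ich helfen?"},
}

def route(utterance, language):
    """Pick the NLU endpoint and prompt set for the active language."""
    model = NLU_MODELS[language]
    prompt = PROMPTS[language]["greet"]
    return model, prompt

model, prompt = route("hallo", "de")
```

Switching languages changes only the routing tables, not the flow itself, which is the appeal of keeping one logic tree for all languages.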


@JarbasAl check this out

Servo, Node-Red and Blockly seem to be targeting three different niches/users.

Servo for the enterprise, Node-RED for makers with minimal coding experience, and Blockly for those wanting to learn how to code.

Integrate all things.

I tried to give the demo a go, but it doesn’t work in Firefox.
It looks like it is trying to connect to Facebook, which makes me immediately suspicious.


Hi, Lior from Servo here, sorry for the delay. It seems you tried it on an isolated offline machine?
FB was needed for login in some cases, but for now we have removed it, so it should work regardless. However, the tutorials might not work without an internet connection, since they rely on cloud NLU.

thanks for the update!