Connecting Rasa to Mycroft: A Guide

Originally published at: http://mycroft.ai/blog/connecting-rasa-to-mycroft-a-guide/

Mycroft recently attended an open source hackathon hosted by the Royal Bank of Canada. One team focused on integrating Mycroft with Rasa, and has graciously provided a write-up of their efforts.

Introduction

In this post, we will show you how to combine the functionality of Mycroft and Rasa, leveraging the strengths of both platforms to create conversational applications.

Many thanks to Jamesmf for his code base. Be sure to check out his GitHub!

What is Rasa?

Rasa is an open-source NLU framework that goes hand in hand with Mycroft, as it allows developers to build conversational agents without handing their data over to a third-party service.

While effective at its job, Rasa lacks a speech-to-text layer that would let users converse with its trained models, and Mycroft can provide exactly that. Communicating over HTTP keeps the Mycroft and Rasa ends of the system modular, and the connection is made even easier by Rasa’s built-in RESTful API endpoints.

By leveraging the unique strengths of both platforms, a strong voice assistant bot can be created, with Rasa taking charge of processing and response handling, while Mycroft handles the voice input and output.

The Configuration

Now that we understand the basics of Rasa and Mycroft, let’s configure a connection. This guide is split into two steps:

  1. Hitting a Rasa API endpoint
  2. Using a Rasa Skill with Mycroft

Technical requirements

To properly configure a connection, we will require the following:

  1. A Rasa project with a trained model, running locally or on a server you can reach
  2. A working Mycroft installation
  3. Python 3 with the requests library available to your Skill

Hitting a Rasa API endpoint

Rasa provides RESTful API endpoints for accessing its services over HTTP. The endpoint we need is the REST webhook, available at: /webhooks/rest/webhook

The Rasa server listens on port 5005 by default, but this can be changed with the -p flag. For example, to set the port to 8000:

rasa run -p 8000

To use this endpoint, we submit a POST request to that address with two parameters: a “message” and a “sender”. The message contains the actual dialog text, while the sender is a name given to your “user”. The sender is used to distinguish between different people so that Rasa can track multiple dialogs at once. In the snippet below, RASA_API is assumed to point at a Rasa server running locally on the default port.

import requests

# Assumes Rasa is running locally on the default port
RASA_API = "http://localhost:5005/webhooks/rest/webhook"
msg = "hello"  # the user's utterance

data = requests.post(
    RASA_API, json={"message": msg, "sender": "mycroftUser"}
)

For more information on the other API endpoints available, such as the conversation tracker, see the Rasa documentation.

Using a Rasa skill with Mycroft

On the Mycroft side, set up a Rasa Skill, which can be activated by some keyword. You can develop a skill following Mycroft’s introduction to developing a new Skill. In this case, we named our activation keyword “Chatwithrasa”, but you can name it whatever you wish.
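
For reference, a keyword required with require("Chatwithrasa") maps to a vocabulary file bundled with the Skill that lists the spoken trigger phrases, one per line. A minimal sketch, assuming a file at vocab/en-us/Chatwithrasa.voc (the path and phrases here are illustrative, and the exact layout depends on your Mycroft version):

chat with rasa
talk to rasa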

Use get_response() to capture the user’s spoken input as text, then send it to the Rasa side by submitting a POST request in the format outlined above. If the HTTP request is successful, an array of JSON objects is returned. Each JSON object contains a “recipient_id” along with a “text” or “image” key, depending on what was returned. The “text” or “image” fields hold the responses you want (note that the “image” field only contains a URI, not the actual file itself).
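
To make the shape of that array concrete, here is a small illustrative snippet (the localhost URL and the example replies are assumptions; the actual content depends on your trained Rasa model):

import requests

# Assumed local Rasa REST webhook on the default port
RASA_API = "http://localhost:5005/webhooks/rest/webhook"

data = requests.post(RASA_API, json={"message": "hello", "sender": "mycroftUser"})
print(data.json())
# Possible output shape (illustrative only):
# [
#   {"recipient_id": "mycroftUser", "text": "Hey! How can I help?"},
#   {"recipient_id": "mycroftUser", "image": "https://example.com/picture.png"}
# ]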

Now, you have a text output from a Rasa server, available for use on Mycroft!

The Source Code (Python 3.X)

# Mycroft Skill that acts as an interface between a Rasa chatbot and a user,
# allowing continuous voice dialog between the two
# Many thanks to Jamesmf for the code base and Kris Gesling for the technical advice
# Note: the class name RasaSkill below is illustrative; use your own Skill's name

import requests

from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler


class RasaSkill(MycroftSkill):

    # Setup method, run after the MycroftSkill constructor has finished
    def initialize(self):
        # Set the address of your Rasa REST endpoint
        self.convoID = 1
        self.RASA_API = "INSERT WEBHOOK API URL HERE"
        self.messages = []
        self.conversation_active = False

    def query_rasa(self, prompt=None):
        if not self.conversation_active:
            return
        if prompt is None and len(self.messages) > 0:
            prompt = self.messages[-1]
        # Speak the prompt to the user and save their response
        msg = self.get_response(prompt)
        # If the user doesn't respond, quietly stop, allowing them to resume later
        if msg is None:
            return
        # Else reset messages
        self.messages = []
        # Send a POST request to the endpoint using the format below.
        # "sender" is used to keep track of dialog streams for different users
        data = requests.post(
            self.RASA_API, json={"message": msg, "sender": f"user{self.convoID}"}
        )
        # A JSON array is returned: each element has a recipient_id field along
        # with a text, image, or other resource field holding the output
        # print(json.dumps(data.json(), indent=2))
        for nextResponse in data.json():
            if "text" in nextResponse:
                self.messages.append(nextResponse["text"])
        # Speak all but the last of the Rasa dialogs
        if len(self.messages) > 1:
            for rasa_message in self.messages[:-1]:
                self.speak(rasa_message)

        # Stop when Rasa stops responding, leaving a fallback prompt for resuming
        if len(self.messages) == 0:
            self.messages = ["no response from rasa"]
            return
        # Use the last dialog from Rasa to prompt for the next input from the user
        prompt = self.messages[-1]
        # Allows a stream of user inputs by calling query_rasa recursively
        # It will only stop when either the user or Rasa stops providing data
        return self.query_rasa(prompt)

    @intent_handler(IntentBuilder("StartChat").require("Chatwithrasa"))
    def handle_talk_to_rasa_intent(self, message):
        self.convoID += 1
        self.conversation_active = True
        prompt = "start"
        self.query_rasa(prompt)

    # Resume chat handler that picks an interrupted conversation thread back up
    @intent_handler(IntentBuilder("ResumeChat").require("Resume"))
    def handle_resume_chat(self, message):
        self.conversation_active = True
        self.query_rasa()

    def stop(self):
        self.conversation_active = False


def create_skill():
    return RasaSkill()

Additional Steps

If your Mycroft device has trouble reaching your Rasa endpoint when it is only running locally, you can use tools such as ngrok to expose the endpoint to other computers. For more information on tunnelling local ports to web endpoints, see the ngrok documentation.
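
For example, if Rasa is running on its default port, a single command creates a public tunnel to it (the forwarding URL that ngrok prints will be different every time):

ngrok http 5005

The printed forwarding address, with /webhooks/rest/webhook appended, can then be used as the RASA_API value in the Skill.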


A big thanks to the whole team for this write-up and their efforts at the RBC hackathon – Alex Zhang, Hongyi Chen, Athena Liu, Kevin Cui, Evan Kanter, and Mikhail Szugalew.
