Self Hosted Server

I have been searching through many possible options for running an AI in my house. There are plenty to choose from, and many more in development, but I have been looking for a completely open platform like Mycroft. The one thing I doubt is being worked on, and that I'd like to see implemented if possible, is a personally hosted Mycroft server. I'd like my own personal Jarvis, with my own modifications, while keeping my data in my own cloud. Would it be possible to get a self-hosted server that I can hook clients to, like my car, my phone, and even my computers? That server could then perhaps be updated directly from Mycroft's server. I guess I'm posting this as a suggestion. I see my future house having Mycroft capability in every room, as well as in my car, and the only way I see that working out is if I can obtain an AI server to hook individual clients to directly. It would make implementation so much easier for me, and I'd really like to retain all of my own information. The only thing that has held me back from preordering a device is the fear of losing out on such a feature, just as I did with my Echo and the other products I've tested. I guess just keep it in mind; I can't wait to see where the project ends up going. Thanks, guys!

This is exactly my question/area of interest as well.

Yeah. I don’t feel comfortable opening my house to another company. I would like to host my own data on my own server as well.

Yes, I think this is an important topic. Having the ability to have my services on my server (for me, my company, family) is very important.

Are there any plans to go into that direction?

The only things I have read are their statements about being concerned with security, and about how it’s as important to them as it is to us. But one of the biggest reasons I was considering Jasper over this project is that I could host it in-house and never have to connect to a server if I don’t want to. I feel this is vital to the success of Mycroft: making it so that it can be updated and stored entirely locally, while still offering a client/server style of operation so the voice assistant can be a more complete automation solution.

It is possible to run Mycroft standalone with a few tweaks. Some things do require APIs, such as Wolfram Alpha and OpenWeatherMap. We use Google STT (until we can get OpenSTT off the ground), but there is an outstanding PR here to use a local Kaldi server, so check it out!
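As a sketch, pointing Mycroft at a local Kaldi server along the lines of that PR is a mycroft.conf tweak roughly like the following. The module name and the URI path are assumptions based on the kaldi-gstreamer-server defaults, not official documentation, so check them against the PR itself:

```json
{
  "stt": {
    "module": "kaldi",
    "kaldi": {
      "uri": "http://localhost:8080/client/dynamic/recognize"
    }
  }
}
```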

Hi all!
I want to run Mycroft standalone for contributing to it in this mode.
I tested PiCroft, tweaked the configs, and it seemed fine (i.e. PiCroft not talking to Home). As I’m also interested in a local STT server, and the Pi doesn’t seem able to handle Kaldi (correct me if I’m wrong), since it needs more computing power than the Pi can offer, I switched to an Ubuntu installation.
That went OK; I managed to install Kaldi and tweak the configs, but I’ve got two issues now:

  1. `mycroft.client.speech.listener - ERROR - global name 'requests' is not defined`, followed by `mycroft.client.speech.listener - ERROR - Speech Recognition could not understand audio`. I couldn’t find out why importing `requests` seems to be a problem (mycroft-core version: 0.8.13).
  2. When I changed the server entries in mycroft.conf to the following:
    `"server": { "url": "", "version": "v1", "update": false, "metrics": false }`, the Ubuntu-based installation keeps trying to connect to the server, resulting in: `ConnectionError: HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: /v1/device//setting (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f7ca2748490>: Failed to establish a new connection: [Errno 111] Connection refused',))`, with a traceback running from `/usr/lib/python2.7/", line 801, in __bootstrap_inner` through `/opt/mycroft/skills/skill-configuration/", line 55, in notify` all the way down to `~/.virtualenvs/mycroft/local/lib/python2.7/site-packages/requests/", line 487, in send`.
    I believe one of the skills could be insisting on connecting to the Mycroft API, but I did set the proxy parameter to false for the Weather and WolframAlpha skills (the ones using API keys based on Mycroft’s servers, if I’m not wrong).

I hope fixing the above will let me test the local Kaldi STT, and I’ll be happy to share all of it, e.g. on GitHub (I followed PR#440 / Issue-438 when setting things up, using the Docker image of the Kaldi GStreamer server).
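On issue 1, that `NameError` usually just means the failing module uses `requests` without ever importing it. A minimal self-contained reproduction of the failure mode (the function name and URL below are illustrative, not Mycroft’s actual listener code):

```python
def broken_recognize(audio):
    # 'requests' is deliberately never imported in this snippet, so calling
    # this reproduces the same failure mode as the listener error:
    #   NameError: ... name 'requests' is not defined
    return requests.post("http://localhost:8080/stt", data=audio)

try:
    broken_recognize(b"\x00\x01")
except NameError as err:
    print("reproduced:", err)
```

If that is the cause, adding `import requests` at the top of the failing module (or upgrading to a mycroft-core release where it is already fixed) should clear the first error.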

This is a must, in my opinion, for Mycroft to reach its adoption goals. On one hand you have many people who are tired of companies mining data, which is why they stay away from products like Nest, Echo, Alexa, Cortana, Siri, etc. They are also tired of the fact that it’s only a matter of time before your cloud is breached. When it comes to a device that is listening 24/7, most people in this camp do not want large corporations holding their data in the cloud.

There are many startups across the board who recognize this trend and are taking customers away from the big data-mining companies because they actually respect their customers’ privacy. So you will miss out not only on these privacy-conscious people, but also on the companies who wish to use your tech to build a privacy-focused product.

Personally, I do not use any AI or speech recognition products, not even on my phone. That is why Mycroft caught my attention, but I am glad I found this thread before buying the device. I don’t want a product that sends data to Google or Microsoft.

So I will patiently wait to see how you solve these issues concerning the anti-data-mining movement. I’m not quite sure when everyone lost their minds and decided it was okay for large corporations to mine everything about us, including in our homes, without our permission, and in some cases only as “opt-out” when it should be opt-in. Privacy in our homes is a basic human right and should not be infringed.

I found this; it’s high speed with good accuracy, Kaldi on steroids:

Sorry I am late to the Party.
For a system that claims not to collect your data, a self hosted home server is a must.
Has there been any progress on the topic?

Thanks so much for your feedback, @Chris_Schantz, and for your ideas on home automation. A lot of the people who use Mycroft use it in conjunction with tools like Home Assistant.

Having a fully self-hosted version of Mycroft is possible at the moment, but doing so is not well documented and requires significant technical expertise. In summary, you would need to replicate a lot of the functionality of Home, which abstracts a whole bunch of API requests and manages Device settings.

It is definitely one of our most requested features, and it is something we’re working towards, but we don’t have a target date as yet. Our key challenge in this space is on-device STT. At the moment our default STT engine does its processing in the cloud, because the on-board processing power of the RPi 3 B is not great. We want to use DeepSpeech (see here for the blog post), but the on-board processor needs to be a bit gruntier for that.

But yes, it’s a frequently requested architectural feature!

Best, Kathy

Hi Kathy,

I am also interested in a self-hosted version of Mycroft, but I understand the challenge of building an STT model yourself that is as good as the state of the art. For now, I am interested in creating new skills that can connect to my privately hosted web service to answer questions. Is this possible using any Mycroft platform? Would Mycroft for Linux be a better option?

I am a Python developer. I have looked into the Mycroft documentation and the existing skills on GitHub. The new-skill documentation explains how to extend the intent_handler to support a wider range of user inputs, but I need some examples of querying data from disk or from a private web service. I briefly read through the existing skills but could not find one that uses a custom data source. Can you please provide some instructions/information on how to do this?

Hi @silvia, thanks for your interest in Mycroft.

Let me take your questions one at a time:

  • Can Mycroft connect to a privately hosted web service
    Well, it depends on how the web service makes itself available. If it is an internet-accessible API, even one that requires authentication, then it is possible in a Skill. If the private web service is not on the public internet, then you will either have to have the Mycroft Device on the same network, or create something like a VPN tunnel so that the Mycroft Device can send and receive data from the private web service. How is the private web service set up?

  • Is Mycroft for Linux a better option
    Mycroft for Linux is recommended for Skills development, yes.

  • Skill examples for querying data from disk or private web service

    • The Aircrack Skill makes calls out to system functions
    • Skills are written in Python, so should be able to use standard file system operations
    • Private web service - if you provide more information on how this is set up, I can advise further.
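To make the private-web-service case concrete, here is a minimal sketch of the query half of such a Skill. Everything here is hypothetical: the endpoint URL, the `q` parameter, and the JSON shape are placeholders for whatever your own service exposes. Inside a real Skill you would call `ask_local_service()` from an intent handler method and pass the result to `self.speak()`.

```python
import json
import urllib.parse
import urllib.request

# Assumption: your privately hosted service lives on your LAN and answers
# GET requests with a JSON body like {"answer": "..."}.
LOCAL_API = "http://192.168.1.50:5000/answer"

def build_query_url(base_url, question):
    """Encode the user's utterance safely as a query-string parameter."""
    return base_url + "?" + urllib.parse.urlencode({"q": question})

def ask_local_service(question, base_url=LOCAL_API, timeout=5):
    """Send the question to the private service and return the parsed JSON."""
    url = build_query_url(base_url, question)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because Skills are ordinary Python, this works on any Mycroft platform; the only requirement is that the Device can reach the service over the network, as described above.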

Best, Kathy

Thanks for the quick response. Can you also let me know the best way to display graphs or charts? Are there any existing skills that do this already?

Hi @silvia, are you able to provide some more information about what you’re wanting to do? Do you want to display graphs and charts on the faceplate of the Mark 1?

We have some documentation available on using the Mark 1 display.

Hi Kathy! Thanks for taking the time to explain your goals and ambitions around this feature request.

After reading your explanation of the current implementation of STT processing, I had a question about the data flow. Assume someone has an RPi with Mycroft on it and a Mycroft Home account. When they issue a voice command to the listening Mycroft device, the device first sends a recording of the command to Mycroft Home, which passes it to the cloud-hosted STT engines. Once that processing has taken place, the response from the STT engines goes back to Mycroft Home, which takes the processed request and matches it to a skill, and the response is then sent back to the individual Mycroft device from which the request originated. Is that an accurate (if simplistic) understanding of how Mycroft works with Mycroft Home?

You mentioned that the RPi 3 B doesn’t have a grunty enough processor to handle STT, which is understandable, but it seems unnecessary for the RPi to do the STT processing itself if it could instead hand that off to a server running in the home that can handle it.

I’ve probably oversimplified things, but what is preventing a setup in which an RPi Mycroft device communicates with a locally hosted STT server?

Nothing prevents you from hosting locally at this point other than your own resources. See also the Personal Backend thread elsewhere in the forum.

1 Like

This is close but not 100% accurate: the “matching to a skill” part (intent parsing) is done by your local Mycroft running on the RPi.

This is possible, but with current STT algorithms a powerful machine (lots of RAM and an Nvidia GTX GPU) is required for “real time” processing, as you don’t want to wait a minute for a response while the STT decodes the audio to text. As far as I understand, the Mozilla DeepSpeech project is working on a major improvement to their STT algorithms that runs on an RPi in (near) real time.
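If you do stand up such a home STT server, the client side is again just configuration. As a sketch, assuming a deepspeech-server style HTTP endpoint on your LAN (the module name, host, and URI below are assumptions about one possible setup, not guaranteed keys for every mycroft-core version):

```json
{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://192.168.1.50:8080/stt"
    }
  }
}
```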