[SOLVED] What voice does Mycroft use?

Am I correct in understanding that Mycroft uses Festival Lite for text to speech software?

If so, which voice does it use?

If not, what software does it use for TTS?

Thank you.

Hi there @leematt,

Mycroft uses Mimic, which is based on CMU's Festival Lite (Flite), for text to speech.
By default, the voice used is ‘Alan Pope’ (AP) - but you can change this in your account at;

You can find out more information on Mimic at;

Let us know if we can help further,

Kind regards,

Hi Kathy!

I’ve given this a try (changing it at home.mycroft.ai) and the configuration doesn’t seem to be pushed down. Well, at least, not the voice configuration anyway. Any idea what might be wrong?

Thanks, Steve

EDIT: as a workaround, I edited /etc/mycroft/mycroft.conf and changed “ap” to “slt”. I’m not sure if this will persist through a reboot, but at least I can play around :slight_smile:
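For anyone else trying this workaround, the relevant section of /etc/mycroft/mycroft.conf looks roughly like this (a sketch based on the default Mimic setup; double-check the key names against your own file):

```json
{
  "tts": {
    "module": "mimic",
    "mimic": {
      "voice": "slt"
    }
  }
}
```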


Not sure what might be wrong, @pickettster - I’m going to ping @forslund and see if he may have some suggestions.
Best, Kathy

I’ve tested this a couple of times now and haven’t been able to reproduce it yet.

If you had an entry in /etc/mycroft/mycroft.conf, this would override the settings from the web. (The resolution order of configs is: WEBSETTINGS, /etc/mycroft/mycroft.conf, ~/.mycroft/mycroft.conf; later configurations override earlier ones.)
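That resolution order can be sketched as a recursive dict merge, where later sources win key by key (a hypothetical illustration of the behaviour described above, not Mycroft's actual loader code):

```python
def merge(base, override):
    """Recursively merge `override` into `base`; override wins per key."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# Resolution order: web settings, then /etc, then ~/.mycroft
web_settings = {"tts": {"module": "mimic", "mimic": {"voice": "ap"}}}
etc_conf     = {"tts": {"mimic": {"voice": "slt"}}}  # /etc/mycroft/mycroft.conf
user_conf    = {}                                    # ~/.mycroft/mycroft.conf

config = merge(merge(web_settings, etc_conf), user_conf)
print(config["tts"]["mimic"]["voice"])  # "slt" - the /etc entry wins
```

This is why changing the voice at home.mycroft.ai has no effect while a conflicting entry exists in /etc/mycroft/mycroft.conf.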

(From your comment above, this sounds like the case, since you changed from ‘ap’ to ‘slt’.)

Not sure if there’s some threading issue or an issue with the refresh logic.
If a new config is received (detected by hashing the old web settings and the new ones), all configs are reloaded, and the audio service checks the tts section for changes. If changes are found, the TTS is shut down and relaunched with the new settings.
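The refresh logic described above can be sketched roughly like this (function and key names are illustrative, not Mycroft's actual code):

```python
import hashlib
import json

def config_hash(settings: dict) -> str:
    """Stable hash of a settings dict, used to detect changes."""
    return hashlib.md5(
        json.dumps(settings, sort_keys=True).encode("utf-8")
    ).hexdigest()

def maybe_reload_tts(old_web, new_web, current_tts_section, load_config):
    """Compare hashes of the web settings; if they differ, reload all
    configs and report whether the tts section changed (meaning the
    TTS engine should be shut down and relaunched)."""
    if config_hash(old_web) == config_hash(new_web):
        return False  # nothing changed, keep the running TTS
    new_config = load_config()  # reload all config sources
    return new_config.get("tts", {}) != current_tts_section

# Example: the voice was changed on the web side
old = {"tts": {"mimic": {"voice": "ap"}}}
new = {"tts": {"mimic": {"voice": "slt"}}}
restart_needed = maybe_reload_tts(old, new, old["tts"], lambda: new)
print(restart_needed)  # True -> TTS would be relaunched
```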

Will investigate further.

Hi All,

I was able to get it working over the weekend. I wonder if my unit had a bad connection to the internet, because it does pick up the new .conf file after about 30-60s. It’s quite funny, because the first answer will be in one voice and the next reply in another :slight_smile:

Thanks for looking into this, but I think it’s OK now for me.


Hi. Can custom voices be used? If I had my own, could I somehow have Mycroft use it?

You can build custom voices, e.g., the AP voice. To build your own for Mimic, start here:

For Mimic2, it’s quite a bit more involved (hardware/time). See https://github.com/MycroftAI/mimic2/blob/master/TRAINING_DATA.md
as well as https://github.com/keithito/tacotron for a bit more on training

Then you’d have to configure your instance to point to the relevant model once that’s built.
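For example, once a trained Mimic2 model is being served, the tts section of mycroft.conf would point at it with something along these lines (key names and URL are assumptions for illustration; check the current docs for your version):

```json
{
  "tts": {
    "module": "mimic2",
    "mimic2": {
      "url": "http://localhost:5000/synthesize?text="
    }
  }
}
```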


Thanks! At the risk of sounding like a complete noob (which I am): are these Linux-based or Windows GUI-based? And is there a Windows GUI-based program I could use to create it?

Also, are there any kind of videos that would point me in the right direction to get started? I’ve seen things like you need to record yourself saying the entries in the CMU Pronouncing Dictionary. Or can I just say sentences out loud and somehow input the audio file along with the text that was read?

I saw Microsoft Custom Voice, but that looks like it’s hosted. It almost looks like you record sentences, upload the text and audio files, and it creates the voice, but it seems to be subscription-based. Is there something like that, but which will export the voice in a format Mycroft would understand?

We’re still working through some of our custom voice tools. Hold tight; give us a few weeks and there should be web-based tools available for voice creation.

The training side of things is all Linux.

Oh, that’s cool! Where will info on that be posted so I can check in periodically?

Also, would I make the voice but then still need to train it, or would the tool do all of that for me?

Sorry, very new at this. When I look at some of the pages, I never know if the code listed is for the Linux CLI or for Python, and if Python, where I’m supposed to run it…

Sure, all our updates are posted to the Mycroft Blog - https://mycroft.ai/blog - and they are usually cross-posted here as well.

We’re still working out the mechanisms re: voice - for instance if you would need to train it or if you could pass us the recordings for us to train.

It can be a bit overwhelming at first, I’d recommend building Skills as a starting point;


Hey, just checking in, still looking like a few more weeks?

Also, I found two pages with sample voices using Mimic2, links below. The earlier ones sound less synthesized but less naturally flowing; the newer one sounds more naturally flowing but a bit more synthesized.

When you release the online way to create our own voices, which will it sound more like? Or is that synthesized sound more related to the tone of the voice used to create it, and less about the algorithm?