Mycroft performance

Hi there, I’m doing a traineeship on Mycroft and need to evaluate its performance. I thought of preparing a document with two parts: the first would be a performance evaluation from the creators of the project, for which I need the help of @KathyReid, @Jarbas_Ai, @Wolfgange, @krisadair, or any member of Mycroft AI Inc.; the second would be based on experiments I run myself, as I have installed mycroft-core on Ubuntu (in a virtual machine).

Thanks in advance :slight_smile:

Hi @Med, thanks for your message.

What specific information are you looking for? Do you have a list of questions? What about the performance do you need to evaluate?

Kind regards,
Kathy

Please tell me how flexible it is with the vocabulary we use when asking it something (for the basic skills that were designed and developed by the company itself).

I’m sorry, I don’t understand?

Let’s take a step back. Can you give me the minimum requirements for installing Adapt on an embedded system, as described on this page: https://mycroft.ai/documentation/adapt/ (section “Is Adapt right for me and my use cases?” -> Lightweight)?
PS: I hope my English is not too bad (it’s not my first language).
:slightly_smiling_face:

I’m not part of Mycroft :stuck_out_tongue: but let me know how I can help.

Maybe it would be easier if you joined the Mattermost chat?

Some quick data:

  • Vocabulary is very elastic; you can use two approaches, Adapt or Padatious. In Adapt you select keywords; read guideline 4: https://jarbasai.github.io//posts/2017/10/skill_guidelines_1/
  • Check how Padatious works here: https://mycroft.ai/documentation/padatious/ . For this you just give sample commands, and it learns what you want it to do.
  • Performance is mainly affected by your platform (Picroft is slower than a desktop), internet speed (many skills require internet access), speech to text (the time for you to finish speaking, plus the time to send the audio data to the cloud and get the result back), skill selection (if you have many fallbacks, Mycroft will try them all), and TTS (generating the speech and playing the WAV file). TTS caches the utterances, so unless you reboot, the second time Mycroft tells you the same thing it should be faster.
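The TTS caching behaviour mentioned above can be sketched in plain Python. This is a toy illustration only, not Mycroft's actual code; `synthesize` is a hypothetical stand-in for a real TTS engine:

```python
import time

# Hypothetical stand-in for a TTS engine: pretend synthesis is slow.
def synthesize(text):
    time.sleep(0.01)  # simulate expensive audio generation
    return b"WAV:" + text.encode("utf-8")

class CachingTTS:
    """Cache synthesized audio by utterance, so repeats are fast."""
    def __init__(self):
        self._cache = {}

    def speak(self, text):
        # First request synthesizes and stores; repeats come from the cache.
        if text not in self._cache:
            self._cache[text] = synthesize(text)
        return self._cache[text]

tts = CachingTTS()
first = tts.speak("hello world")   # slow: audio is generated
second = tts.speak("hello world")  # fast: served from the in-memory cache
assert first == second
```

The same utterance only pays the synthesis cost once, which is why Mycroft answers faster the second time it says the same thing.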

Hello, @med. I want to make sure you are clear on the various technology pieces that make up the Mycroft solution.

The code we call mycroft-core connects many different pieces into a whole voice interaction system. The basic pieces are: wakeword, speech to text, intent processing, skills, and text to speech.

  • Wakeword
    Originally we built on top of PocketSphinx from CMU, but we have switched the default “Hey Mycroft” wakeword spotter to our Precise technology.

  • Speech to Text
    The current default is to use the Mycroft server, which acts as a proxy to Google STT. We have just merged code to use the new DeepSpeech-based service running on a Mycroft server.

  • Intent processing
    We have two technologies that work in parallel. Adapt is the first, which is a keyword-based system. The author can build complex matching rules based on these keywords and/or regex. The keywords can have many different values (included in the .voc file) and can also be treated as optional. Padatious is an example-based system. The author provides examples of activation phrases, then it uses a neural net approach to determine similarity.

  • Text to Speech
    The original and current default engine is called Mimic. It evolved from the CMU FLITE system. More recently we began working on Mimic 2, which uses a Tacotron-based architecture. The code is already available, and it will be hooked into the main systems soon.
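The two intent-processing approaches described above can be made concrete with a toy sketch. This is illustrative only, not the real Adapt or Padatious APIs; the real systems are far more sophisticated (regex rules, optional keywords, a neural similarity model):

```python
# Adapt-style: an intent matches when all required keywords appear.
def keyword_intent(required, utterance):
    words = set(utterance.lower().split())
    return all(kw in words for kw in required)

# Padatious-style: score an utterance by word overlap with example
# phrases (a crude stand-in for the neural similarity approach).
def example_intent_score(examples, utterance):
    words = set(utterance.lower().split())
    best = 0.0
    for ex in examples:
        ex_words = set(ex.lower().split())
        overlap = len(words & ex_words) / len(words | ex_words)
        best = max(best, overlap)
    return best

# Keyword rule: a weather intent requires the "weather" keyword.
print(keyword_intent({"weather"}, "what is the weather today"))  # True

# Example-based: the utterance closely resembles a trained example.
examples = ["turn on the light", "switch the lamp on"]
print(example_intent_score(examples, "turn on the light") > 0.9)  # True
```

In Adapt the skill author enumerates keyword values (the `.voc` files); in Padatious the author supplies example phrases and the engine generalises from them.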

I hope this helps you understand the system a little better. The system was designed to run on fairly low-power equipment, such as a Raspberry Pi 3 or better. The vast majority of the code runs under Python 2.7 and is switching to Python 3, with a few exceptions; mainly, Mimic is written in C.

As for Adapt itself, it should be fine on most any embedded system that can run Python.


Hi @steve.penrod. That information was really helpful. So, if I understood correctly, all components of the system are installed and executed on the device, except for the speech-to-text tasks, which are done on Google's servers; i.e., at the moment there is no Mycroft Inc. server anywhere in the chain.
:slight_smile:

Hi @Jarbas_Ai, your help is of great value, thank you very much. I'm extremely new to Mattermost and unfamiliar with it. Can you give me other ways to contact you (WhatsApp, email, Facebook, phone)?
Could you first give me instructions to set up and launch the Jarbas-core server? Any extra information on this server would be appreciated.
Thanks in advance for your understanding.
:slight_smile: