Introduction: Steve Penrod, CTO of Mycroft AI

I’d like to introduce myself to the Mycroft community. My name is Steve Penrod, and as of last week I’ve come into the Mycroft organization as the CTO via a merger between my company and Mycroft. Now that I’ve given you the tl;dr version, let me back up a little…

After wrapping up my run at my previous startup, during the summer of 2014 I was struck by the maturation of several key technologies. After prototyping and validating some of my beliefs, I began developing a natural-language, conversational voice interface built using single-board computers (sound familiar?). I’ve been doing this in Kansas City, operating in stealth mode until this spring. At that time I began reaching out to potential partners and was asked how I was related to Mycroft, since surely we knew each other given the geographic and technological coincidence. But before April, neither of us had heard of the other.

Soon after this a meeting was arranged and Josh and Ryan met me for coffee. We had a great conversation, sharing our design ideas and goals. The next week I visited the Mycroft offices and attended their Techstars Demo Day debut. But we each had slightly different views of how to bring our idea to the world (I was pursuing partnerships with some large organizations whose names you likely know) and briefly fell off of each other’s radar.

Fast forward a few months and a few product announcements, and Ryan reached out and asked if I would come back to Lawrence to chat. He proposed the idea of us joining forces and after some pondering the idea began to grow on me. At the start of July I began to hang out at the Mycroft offices as an observer and adviser. I demo’ed the system I’d created and freely shared many of the thoughts and philosophies behind my efforts within the team. I got to know the Mycroft internal team and began to follow and occasionally jump in on the Mycroft community discussions. I really enjoyed the experience and I saw some great potential – there were certainly overlaps, but more importantly there were lots of unique strengths.

I worked with everyone at Mycroft to craft a merger agreement and a structure we all liked so we could move forward. I’ve been in technology product development for over 25 years and love building teams to harness new technologies. My original intention was to free Ryan of the CTO duties so he could focus on the Mycroft evangelization and community-building that he loves and excels at. However – as you may have read already – he has decided to take this opportunity to pursue other passions and help other projects. I am grateful to Ryan for all of his efforts in getting Mycroft rolling and promise to do my best to honor the vision he helped build.

The curious can read about my professional background on LinkedIn. I’m looking forward to bringing Mycroft to the world with you!


Welcome, Steve! I became a Mycroft backer after hearing Ryan talk about the project on the Linux Action Show podcast, so it’s sad to see him go. But it sounds like both sides are making the best of it.

Can you offer us any insight into how you think your conversational voice interface experience will drive the project toward success? Does this mean superior context recognition? What kinds of technical contribution are you looking for from the community at this point?



Why does this worry me?

Hey guys,

Sorry about the slow reply, the last two weeks have been a real whirlwind. (Look for a blog post tomorrow about why.)

I hope to have time in the next few weeks to show off some of what I built with my Christopher project, but some of the big differences are in its understanding of Context. Context is what allows us to converse efficiently. For example, the phrase “How about tomorrow?” is, in and of itself, useless, even to a human. But that same phrase carries plenty of meaning in this exchange:

Me: Christopher, what is the weather going to be like today?
Christopher: Sunny and 83 degrees
Me: How about tomorrow?
Christopher: There is a chance of rain

On the other hand, the exact same phrase can also mean something totally different in a different context:

Me: Christopher, what is on my calendar for this afternoon?
Christopher: There are two events: Phone call with Stu at 2:00, Play practice at 5:30
Me: How about tomorrow?
Christopher: Nothing is on your calendar

So the Context here is based on previous interactions. In both those cases the interaction was between the human and one “Skill”, but people aren’t really single-threaded. We bounce from thing to thing and back again. So in the middle of those I could have asked Christopher to turn on a light, and he would still know that we had recently been talking about my calendar or the weather. So there are several conversational Contexts involved.
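The multi-context idea above can be sketched roughly in code. This is a toy Python illustration, not the real Christopher or Mycroft implementation; the `ContextManager` name, the entity dictionaries, and the `intent` key are all my own assumptions for the sake of the example:

```python
from collections import deque

class ContextManager:
    """Hypothetical sketch: keep a short history of recent interactions,
    one entry per Skill, so a fragment can borrow missing meaning."""

    def __init__(self, depth=5):
        # Most recent interaction first; each entry is (skill_name, entities).
        self.history = deque(maxlen=depth)

    def remember(self, skill, entities):
        self.history.appendleft((skill, entities))

    def resolve(self, fragment):
        # A fragment like "How about tomorrow?" supplies only a date entity;
        # borrow the intent from the most recent complete interaction.
        for skill, entities in self.history:
            if "intent" in entities:
                return skill, {**entities, **fragment}
        return None, fragment

ctx = ContextManager()
ctx.remember("weather", {"intent": "forecast", "when": "today"})
skill, query = ctx.resolve({"when": "tomorrow"})
print(skill, query)  # weather {'intent': 'forecast', 'when': 'tomorrow'}
```

Because the history holds several entries, an unrelated request (say, turning on a light) in between would not erase the weather or calendar context; `resolve` could scan back past it.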

In addition to the Context of the spoken words, there are other contexts that help with interpreting words. Such as:

Physical location:
This allows me to know, when I say “turn on the light”, which light to turn on. I.e. if I’m in the kitchen when I say it, I mean the kitchen light.

Speaker identity:
When I say “what is on my calendar”, I want to know what is on my calendar, not my son’s or daughter’s. Likewise, they want to hear about their calendars, not mine.

Visible environment:
When a recipe is showing on a monitor or TV that I can see in the room, that is part of the context of the room. So the phrase “show me the ingredients” should make sense even if I haven’t said anything about recipes for several hours, as long as it is still being displayed.

Audible environment:
“Skip this” should be understandable when music is playing through my home theater system.

Isn’t it cool when the lights just turn on when you walk in the room? Or to be reminded of an appointment in whatever room you are standing in?
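Stitching those environmental contexts together might look something like this toy sketch. Every name here (`resolve_device`, `speaker_room`, the shape of the `environment` dict) is an illustrative assumption, not a real Mycroft or Christopher API:

```python
def resolve_device(utterance_entities, environment):
    """Hypothetical sketch: fill in an entity the utterance left out
    (which light?) from environmental context (which room is the
    speaker standing in?)."""
    resolved = dict(utterance_entities)
    if resolved.get("device") == "light" and "room" not in resolved:
        resolved["room"] = environment.get("speaker_room")
    return resolved

# The environment might be fed by presence sensors, logins, and screens.
env = {"speaker_room": "kitchen", "speaker": "steve", "screen": "recipe"}
print(resolve_device({"device": "light", "action": "on"}, env))
# → {'device': 'light', 'action': 'on', 'room': 'kitchen'}
```

The same lookup pattern extends to the other contexts: `speaker` disambiguates whose calendar, and `screen` is what lets “show me the ingredients” make sense hours later.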

All of the above examples are legit real interactions, not future stuff. I can’t make it available immediately in the Mycroft platform because of some architectural stuff that has to be ironed out, but it is coming. Trust me, nobody is more anxious to get this out to the world than me!

As for community contribution, I have to admit that it is kind of a new concept to me. I grew up in the heyday of push distribution, where organizations like Autodesk crafted the tools to pass on to the consumers, and the main contribution in return was feature requests. My previous company served largely non-technical users, so even though it existed during the connected era of the Internet, there weren’t many who were capable of giving back technically. So having a community that is developing a Text-to-Speech engine, Android and Linux desktop integration, and more is amazing! Asking for more seems almost greedy. :slight_smile:

I think the biggest technical contribution that I’m going to be asking for soon is understanding from Skills developers when I make some breaking changes to the Skills APIs. There are some architectural things that need to change substantially to support what I’m talking about, and that might mean it makes more sense to break a few existing Skills today so it can be done quickly before there are dozens or hundreds of Skills. I think it will be worth it.

And ezieger, please don’t worry! Ryan left knowing that Mycroft was in caring hands. :slight_smile:

  • Steve

Looking forward to seeing those context features in action! Thanks for the response, Steve.

How do I get started with the Raspberry Pi or Android setup?