Hi everyone, I'm making the switch from the original Mycroft to Neon. I've tried following the Neon documentation, but I'm having a difficult time figuring out how to do the things I used to do in Mycroft. I run a Raspberry Pi 4 and just used mycroft-core with its venv for everything. Help on any or all of these items is appreciated:
I am struggling to get Docker working (probably due to relative paths, and I'm not sure how to fix that), but I'm wondering whether that is best for my setup, or whether I can use pyenv like I used to within the mycroft-core folder in my home directory. I tried running setup.sh in NeonCore and it spit out errors, probably due to dependency incompatibilities (Python 3.6 appears to be the latest version compatible with the ovos-tts-plugin-mimic 0.2.8 dependency, while I have Python 3.11). The install instructions on GitHub don't really work for my setup. How exactly should I install on top of Raspberry Pi OS, since the documented steps aren't working? I have a feeling the process is slightly different there.
How do I access the equivalent to mycroft-cli in Neon so I can manage everything? I want to do everything via the command line.
What is the process for creating skills? I used to be able to do most of it automatically via "mycroft-msk create", then modify the locales and __init__.py of that skill, and I was ready to go just like that. How do I do that with Neon? Is there anything as handy as mycroft-msk create?
How would I go about getting my custom precise trigger phrase on it? I have a model designed for precise 0.3.0. How (or rather, where) would I configure hotwords and put .pb files so I can reference the model and its thresholds?
I pretty much use nothing but skills I've created myself, which primarily run SSH commands, since I use voice commands to control multiple remote systems. How and where should I put the Mycroft skills I've created so far into Neon? I don't use GitHub for any of them; they used to live in the /opt/mycroft/skills folder.
Does the syntax for the messenger service in scripts work pretty much the same way as Mycroft? I was referencing stuff from the import statement: “from mycroft_bus_client import MessageBusClient”. I have a master script designed for sending and emitting messages, primarily used for my breadboard status LEDs and buttons to control it on the Pi.
Help on any or all of these items is greatly appreciated. If I can get these specific things figured out, I should be in good shape to use Neon AI and do stuff I used to be able to do before the website migration. It is hopeless until I can get these things figured out.
Hello and welcome to the new forum! I'll do my best to answer all of your questions in the context of running this on an RPi4, and hopefully this will be a basis for some documentation updates.
The latest dev Docker images will run on a Pi4/arm64 system thanks to @arborealfirecat in our Matrix chat. The docker-compose file from NeonCore should pull those images, and the .env file can be updated (or you can set those variables directly, e.g. export NEON_XDG_PATH="/home/$USER/neon").
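For example, a minimal sketch of that setup (the path here is just an example; the actual variable names come from NeonCore's .env file):

```shell
# Point the compose file at an absolute XDG path (example location)
export NEON_XDG_PATH="$HOME/neon"
mkdir -p "$NEON_XDG_PATH"

# Then, from the NeonCore checkout:
# docker compose up -d
```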
I will mark the setup.sh script as deprecated since it hasn’t been used or maintained since we moved to Docker for development setups and Debos for prebuilt OS images. You may also want to use Neon OS instead of installing over RPi OS if you’re just running Neon on a Pi4.
neon-cli (pip install neon-cli-client) is basically the Mycroft CLI with some config options to use different paths. Neon mana also provides some good tools for interacting with Neon.
For MSK: I believe @mikejgray has a Projen project that implements the same functionality, but I personally didn't use either much.
Skills go in a skills directory under the XDG data directory, i.e. ~/.local/share/neon/skills.
The Messagebus is basically the same and there are helper methods in mana (linked above) for CLI access. If you’re integrating another project, I’d recommend using ovos_bus_client as a direct replacement for mycroft_bus_client as it adds some better support for multiple client sessions and response routing if there are multiple clients connecting to a single core.
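For reference, both clients speak the same JSON protocol over the websocket (by default ws://localhost:8181/core), so Mycroft-style messages translate directly. A rough sketch of the wire format using only the stdlib for illustration; in practice you'd let mycroft_bus_client or ovos_bus_client build and send this for you:

```python
import json

# The bus protocol is JSON objects with "type", "data", and "context" keys.
# This mirrors what Message("recognizer_loop:utterance", {...}) serializes to.
utterance_msg = {
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["lay down"], "lang": "en-us"},
    "context": {},
}

# This payload is what gets written to the websocket connection
payload = json.dumps(utterance_msg)
```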
I hope these all help; I’d definitely encourage you to go either with the custom Neon OS image or Docker containers since those deployments are what get the most testing and have update systems in place.
The projen project that Daniel linked should handle your skill creation needs, but it won’t install them automatically. A big difference between Neon/OVOS and Mycroft is that all skills in Neon/OVOS are intended to be pip-installable, so instead of a very Mycroft-specific format, you’re just creating a standard Python package.
Retrofitting a Mycroft skill to use the latest practices is an option with the linked package. You should be able to use the old skill as-is, but some time this year that backwards compatibility will disappear, so this is a good opportunity to start porting your code.
I should have elaborated on this; as Mike noted, this path is there for backwards compatibility, with the expectation that skills be packaged for installation with pip.
Hi,
Thanks for the response; I appreciate it. I am following up according to the prior item numbers.
I am having trouble getting docker-compose onto the Raspberry Pi. The guides don't show how to do it for arm64/aarch64, which is apparently why nothing works. How do you suggest I get docker-compose on that system? It was hard for me to find official sources for that. Also, I see a yaml file in the NeonCore folder. Will that work when I run docker-compose up -d, or do I need to do anything else first to get it running?
I haven't gotten it running yet due to the Docker issues, but after looking at the rest of the response, a couple of other questions surfaced.
For 4, does that use mostly the same syntax as the Mycroft hotwords? I'm only familiar with specifying the path to the .pb file and setting the threshold and sensitivity levels. Will my existing model work with this? Also, do I need to specify a sound? I wasn't doing that before and likely don't have one; I was fine with how it was before.
For 6, I assume ovos_bus_client installs with pip? Do I need to run it in a special environment (like how I used to have Mycroft's venv activated to use messagebus-related stuff), or will it work anywhere on the system like any Python script? Are the message types more or less the same as Mycroft's, like "recognizer_loop:record_begin", or sending a pseudo-utterance via "Message("recognizer_loop:utterance", {"utterances": ["lay down"], "lang": "en-us"})", etc.?
Okay, that makes sense. The thing is, though, my skills are very specific to my setup and, to be frank, probably aren't optimized for others to use right now. I am running very specific commands remotely on my computer(s) via SSH; it is kind of janky, but it works well enough for my setup. I would prefer to keep them local if possible. I would have to do a lot of tailoring to make them accessible to a public audience, and it may be quite a while before I get a chance to do that, if ever. They also don't account for the vast variations in other people's setups and would need a massive refit to handle that. I treated Mycroft as a way to make Python voice-command driven, so I did just enough to make that work for me.
How would you suggest I approach things regarding making them a Python package? Also, is there documentation or something on how things would end up getting packaged and the overall procedure for that?
Here’s an introduction to Python packaging in general. However, Neon/OVOS skills have a particular entrypoint setup in order for the system to recognize a given package as a skill, and it’s a little complex. That’s partly why I created the projen project linked earlier - it’s designed to take an existing Mycroft skill, with the --retrofit flag, and handle all of that for you. It should also recognize any imports that need to be updated to OVOS packages.
Also, making your skills into packages doesn't require you to make them public. Pip can install a skill from your filesystem, from a private GitHub repo, or in any number of other ways that keep your skills from being publicly available.
I believe there is some documentation out there on manually modernizing a skill, but the documentation here will tell you what the projen project will and won’t do to your existing skill.
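To give a rough idea of the entrypoint setup mentioned above, the skill loader discovers installed packages through a Python entry point. A sketch of what that might look like in a skill's pyproject.toml (the skill name, author, module, and class here are placeholders; the projen template generates the real thing, so treat this as illustrative only):

```toml
# Hypothetical pyproject.toml fragment for a retrofitted skill package.
# The "ovos.plugin.skill" entry-point group is what the skill loader scans;
# "my-ssh-skill.exampleauthor" and "my_ssh_skill:MySSHSkill" are placeholders.
[project.entry-points."ovos.plugin.skill"]
"my-ssh-skill.exampleauthor" = "my_ssh_skill:MySSHSkill"
```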
I won’t address the Docker-specific questions about Neon, but I did want to touch on a couple of your questions:
Precise is supported by OVOS/Neon: see OpenVoiceOS/ovos-ww-plugin-precise-lite on GitHub. If you have a .pb file, though, you may need to train a tflite model. I'm not super familiar with Precise, but I know several people are still using their previously trained wake words.
The OVOS Messages documentation (Core section) is a work in progress but should have most of the message bus types you're looking for. Generally speaking, all the old message types will still work. Neon also ships a tool (pip install neon-mana-utils) that lets you send arbitrary bus messages.
The Debian instructions should work for RaspiOS, but I’m not sure what RaspiOS ships by default. @goldyfruit might know as I think he’s running a bunch of Docker containers on RPi hardware (though I think not with RaspiOS).
That should work; there's a note in there about editing the .env file to use absolute paths instead of relative ones, which is necessary for recent versions of Docker. Also, docker-compose is now docker compose in newer versions of Docker, but they should work the same in this case (I know there were some subtle changes to the spec when it was accepted into the official Docker package).
I believe Mycroft always played a sound by default; in OVOS/Neon we made the sound optional per-WW so you could do things with multiple wake words like “Hey Mycroft” plays one sound and “Hey Neon” plays another, or you could (for example) have a hotword “shut up” that sends a command to mute rather than starting to listen that plays a different sound.
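To make that concrete, a per-wake-word entry in ~/.config/neon/neon.yaml might look something like the sketch below. The key names are from memory of the OVOS Precise plugin config and should be double-checked against the plugin README; the model path and values are examples, and the sound line can simply be omitted if you don't want a chime:

```yaml
# Illustrative hotword config; verify key names against the plugin docs
hotwords:
  hey neon:
    module: ovos-ww-plugin-precise-lite
    model: /home/neon/.local/share/neon/my_wakeword.tflite
    sensitivity: 0.5
    trigger_level: 3
    # optional per-wake-word chime; leave this key out for silence
    sound: snd/start_listening.wav
listener:
  wake_word: hey neon
```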
If you have Python3.11+ it won’t let you install outside of a venv, but pipx install will make a venv for you and add any package entrypoints to your $PATH. If you’re looking for something like an interactive CLI, I’d recommend neon-mana-utils. If you’re looking for something to use in Python scripts or snippets, then you’re better off creating a venv and probably using ovos-bus-client directly.
Pretty much, yes; I see @mikejgray already linked the full docs from OVOS.
Okay, so as it turns out, docker-compose doesn't work, but docker compose (without a hyphen) does work on my system. Perhaps the NeonCore readme and instructions should be updated, or at least indicate that it could be either one?
After I got things launched, I decided to try neon-cli-client. First, pip wouldn't let me install it unless I passed the --break-system-packages flag. It seemed to work with that, but was that the right way to approach things? Currently I have a test setup and can easily roll back if I break something while I get all of this worked out. Also, when I ran neon-cli (in the docker directory, after launching the services), it doesn't appear to be connected to anything: no output messages other than being connected to the messagebus service, the mic level isn't registering like it does in Mycroft, and typing manual commands doesn't work either. Is it properly connected to everything, given that I am running Docker, installed the client via pip install neon-cli-client --break-system-packages, and rebooted?
Definitely. If you can send a PR adding that note, it would be much appreciated. Otherwise, I have a list of documentation updates somewhere that I will add it to.
Python 3.11 started enforcing rules against pip install outside of virtual environments (rightly so IMO). pipx install neon-cli-client (maybe after sudo apt install pipx) should install to a venv automatically and add the command to your $PATH.
If you have the messagebus connection message then the commands should work… For logs, you will probably need to start it with neon-cli --logs-dir /home/$USER/NeonCore/docker/xdg/state/neon/ or similar, depending on where docker is mounting that xdg directory. I don’t believe the mic level will work though; @JarbasAl or @goldyfruit can correct me if I’m wrong, but I think that’s been dropped from all of the current listener modules.
If you can check the logs (should be in the xdg directory next to docker-compose.yml by default) for errors, that will help figure out why you’re not getting responses.
I just tried converting one of my old Mycroft skills using projen new ovosskill --from "@mikejgray/ovos-skill-projen@latest" --retrofit in the root skill directory. I get the following error, which I believe is because the skill doesn't have any personal information in it (which I prefer for the moment, at least until I curate my old GitHub).
The error is:
Project definition file was created at /home/pi/NeonCore/docker/skills/open-spirits-arise-equalizer-skill/.projenrc.json
vocab folder not found in original skill; skipping.
dialog folder not found in original skill; skipping.
regex folder not found in original skill; skipping.
intents folder not found in original skill; skipping.
Found manifest.yml, trying to extract Python dependencies
Reinitialized existing Git repository in /home/pi/NeonCore/docker/skills/open-spirits-arise-equalizer-skill/.git/
Error: Command failed: git commit --allow-empty -m "chore: project created with projen"
Author identity unknown
*** Please tell me who you are.
Run
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
to set your account's default identity.
Omit --global to set the identity only in this repository.
fatal: unable to auto-detect email address (got 'pi@raspberrypi.(none)')
at genericNodeError (node:internal/errors:984:15)
at wrappedFn (node:internal/errors:538:14)
at checkExecSyncError (node:child_process:890:11)
at Object.execSync (node:child_process:962:15)
at exec (/home/pi/.nvm/versions/node/v21.7.1/lib/node_modules/projen/lib/util.js:15:19)
at git (/home/pi/.nvm/versions/node/v21.7.1/lib/node_modules/projen/lib/cli/cmds/new.js:343:46)
at initProject (/home/pi/.nvm/versions/node/v21.7.1/lib/node_modules/projen/lib/cli/cmds/new.js:353:13)
at initProjectFromModule (/home/pi/.nvm/versions/node/v21.7.1/lib/node_modules/projen/lib/cli/cmds/new.js:294:11)
at Object.handler (/home/pi/.nvm/versions/node/v21.7.1/lib/node_modules/projen/lib/cli/cmds/new.js:95:26)
at /home/pi/.nvm/versions/node/v21.7.1/lib/node_modules/projen/node_modules/yargs/build/index.cjs:1:8993 {
status: 128,
signal: null,
output: [
null,
null,
<Buffer 41 75 74 68 6f 72 20 69 64 65 6e 74 69 74 79 20 75 6e 6b 6e 6f 77 6e 0a 0a 2a 2a 2a 20 50 6c 65 61 73 65 20 74 65 6c 6c 20 6d 65 20 77 68 6f 20 79 6f ... 282 more bytes>
],
pid: 3691,
stdout: null,
stderr: <Buffer 41 75 74 68 6f 72 20 69 64 65 6e 74 69 74 79 20 75 6e 6b 6e 6f 77 6e 0a 0a 2a 2a 2a 20 50 6c 65 61 73 65 20 74 65 6c 6c 20 6d 65 20 77 68 6f 20 79 6f ... 282 more bytes>
}
The projen code attempts to create a dev branch and set it as the default. Since you don't have git fully configured, that command fails. You can set a fake username and email, and it should work the next time you run the command.
Thanks for finding an edge case! I’ll get an issue open to have better support for that.
I just ran the projen command mentioned before (after setting fake global git credentials), and it looked like it worked. When I try running voice commands from the skills, though: previously it reported an error in the skill (note that it was identifying the correct skill to execute based on the utterance), but now, after doing the conversion with --retrofit, it acts like the skill is no longer there. Normally when I say "open youtube" in Mycroft, for example, it remotely opens a browser on my computer with youtube.com. I have the skill in ~/NeonCore/docker/skills/[skill]. I did the TODOs to the best of my knowledge, and it acts like the skill isn't there at all, saying "Sorry, that feature isn't quite ready yet" (it should say "opening…" if my skill is working correctly). According to the .env file, I believe I am using the correct folder, assuming ${PWD} refers to the docker folder I am in. As I mentioned, it showed the name of the skill before I ran projen with --retrofit.
Suggestions?
Also, what is the best way for me to see the debug output? I used to use mycroft-cli and it would show everything that was happening, including if a skill had an error loading as I develop or modify skills with mistakes. What would be the best way for me to do that nowadays with Neon? I’m open to using mana if it offers functionality for that. It is so difficult to pinpoint stuff without it.
Try pip install ~/NeonCore/docker/skills/[skill], restart your Neon services, and give that another try. If it works, you'll want to add /home/neon/NeonCore/docker/skills/[skill] to your ~/.config/neon/neon.yaml file.
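Roughly like this, if I'm remembering the config key correctly (check the NeonCore configuration docs for the exact name; the path is the one from your setup):

```yaml
# Sketch of a neon.yaml entry listing a skill to install; the
# "default_skills" key name is from memory and should be verified
skills:
  default_skills:
    - /home/neon/NeonCore/docker/skills/[skill]
```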
In Neon/OVOS, all skills are now Python packages, and as long as they’re installed in the virtual environment (and the skill loads), the system will register their intents.
I haven't had much chance to work on this until now, so I'm trying to get things working again. My Raspberry Pi install got all screwed up, so I am starting over. I am having a hard time getting it to do what I was able to do before and would like some help.
I've installed Docker and run docker compose in NeonCore, which launches just fine. Previously, I was able to execute commands by saying "Hey Neon", and it would respond to basic commands like "tell me a joke". Now, when I try to replicate that on the new setup, it does not respond to anything, even though the containers are running.
What could I be doing wrong this time? I could have missed a step that used to be obvious to me but now isn't. It is worth noting that docker compose was erroring until I created a /xdg folder and gave it permissions. It worked the first time without my having to do anything like that; then the error appeared until I created the folder, and it has been working ever since. I'm not sure the containers are truly running, especially after that. As a side note, I have been learning Docker over these past months, but I still need to learn more about volume mounts and how they are implemented. I'm running into more issues this time than I ever did when first setting it up, which is strange.
The Compose file has a volumes section for each service that mounts a volume. The left side of the colon is the local (host) path; the right side is the path inside the container. Make sure each local path exists. You can check on container status with docker ps, and check each container's logs with docker logs $CONTAINERNAME. That should be enough to get you going, but if something isn't clear, please reach out again!
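For example, a volumes entry in docker-compose.yml looks roughly like this (the service name and container path here are illustrative, not the actual NeonCore values):

```yaml
# Illustrative compose fragment: host path on the left of the colon,
# container path on the right; the host path must exist before "up"
services:
  neon-skills:
    volumes:
      - ./xdg:/home/neon/.local/share/neon
```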