Noise isolation using video

I know not everyone wants another video camera in their house, but for people who are OK with it (or in places where it's more acceptable), this looks promising as a way to clean up audio.

At least to me, the best path for this would be to create a PulseAudio module using LADSPA (a simple audio plugin API that PulseAudio can load) that takes in audio, runs it through the network, and outputs it back through a new output (sink).

The one hurdle for me is writing C, which is what the modules assume you are writing in. This might be solved by using the JACK audio server instead, so that we can use JACK-Client · PyPI.

My guess at the flow: create a JACK client port, read the data in (as a numpy array), push it through the neural net, and write the result out to a new source. Then modify Mycroft's pactl configuration to use this new source.

The other issue: I have no idea what format the net is expecting the data in. If I used pyaudio to create an audio stream (which doesn't seem to be the intended input, judging from the args in the script I was looking at; they expect folders) and cv2 for video data, I don't know if the net will take those formats or if some massaging might be necessary.
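The massaging would probably look something like this. This is just a sketch of the usual conversions, not what that particular net actually expects: pyaudio streams typically hand you 16-bit PCM bytes, and cv2 frames come in as uint8 BGR arrays, so both likely need converting to normalized float32 before going anywhere near a net.

```python
import numpy as np

def audio_bytes_to_float(raw: bytes) -> np.ndarray:
    """Convert 16-bit PCM bytes (as read from a pyaudio stream) to
    float32 samples in [-1, 1]."""
    return np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0

def frame_to_net_input(frame: np.ndarray) -> np.ndarray:
    """cv2 frames arrive as uint8 BGR arrays of shape (H, W, 3);
    many nets want normalized float RGB instead."""
    rgb = frame[:, :, ::-1]            # flip BGR channel order to RGB
    return rgb.astype(np.float32) / 255.0
```

Whether the net also wants resizing, batching, or a spectrogram instead of raw samples would depend on how it was trained.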

I created a reddit post with a slightly more formal write-up of the idea:

My start of the process:

Very rough, and untested. I simply reused the example pass-through audio code from jack-client and replaced the generic process callback with one that passes a numpy array of the audio, plus video from camera "0", to a function that currently just returns the audio unchanged. (If this works on the first run, I'm quitting my day job, lol.)
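For anyone curious, the shape of what I mean is roughly the following. This is a sketch based on jack-client's pass-through example, not my actual script; the client name and port names are made up, and `denoise` is just the placeholder where the net would go. Running `main()` needs a JACK server plus the JACK-Client and opencv-python packages installed.

```python
import numpy as np

def denoise(audio: np.ndarray, frame) -> np.ndarray:
    """Placeholder for the audio-visual net: returns the audio unchanged."""
    return audio

def main():
    import cv2   # camera frames
    import jack  # JACK client bindings (pip install JACK-Client)

    client = jack.Client("video-denoise")          # hypothetical client name
    inport = client.inports.register("input_1")
    outport = client.outports.register("output_1")
    cap = cv2.VideoCapture(0)                      # camera "0"

    @client.set_process_callback
    def process(frames):
        ok, frame = cap.read()
        audio = inport.get_array()                 # float32 numpy view
        outport.get_array()[:] = denoise(audio, frame if ok else None)

    with client:
        # Connecting the ports to the mic and a sink is left to
        # jack_connect / a patchbay.
        input("Press Enter to quit...")
```

The process callback runs in JACK's realtime thread, so a real version would want the camera read and the net inference moved off to another thread with the callback just shuttling buffers.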


That is pretty amazing! Interested to hear how your experiments work out.

Did you see this new hearing aid?


That is really cool!