Add audio processing module #99
base: main
Conversation
Allocated audio frame data is not disposed of when allocated by Unity
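To illustrate the disposal concern above, here is a minimal sketch in plain C# (no Unity or SDK types; `AudioFrame` and its fields are hypothetical stand-ins for the real native frame wrapper) showing the `using` pattern that guarantees the frame is released even when processing throws:

```csharp
using System;

// Hypothetical stand-in for the native audio frame wrapper; the real type
// and its native buffer management live in the SDK.
public class AudioFrame : IDisposable
{
    public bool Disposed { get; private set; }
    public float[] Data { get; } = new float[480]; // 10 ms at 48 kHz mono

    public void Dispose()
    {
        // In the real wrapper this would free the native buffer exactly once.
        Disposed = true;
    }
}

public static class Example
{
    public static bool ProcessOneFrame()
    {
        AudioFrame frame = new AudioFrame();
        using (frame)
        {
            // ... hand the frame to the APM here ...
        } // Dispose() runs here, even if processing throws.
        return frame.Disposed;
    }
}
```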
Hey all, I've been keeping an eye on this PR and just gave it a spin. While it works on Mac/Windows, it fails on Android arm64 (I assume it would be the same for the other ARM architectures). I think the latest Android .so is not up to date:
I noticed install.py does not have Android listed in the platforms, but I manually downloaded and replaced the
Hi @holofermes, thank you for reporting this. Android should definitely be included as one of the platforms in
Runtime/Scripts/ApmReverseStream.cs
Outdated
{
    while (true)
    {
        Thread.Sleep(Constants.TASK_DELAY);
I think we're likely going to have a skew here (this will impact the AEC a lot in long room durations).
Is there a way to process the frames directly as we receive them?
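One hedged way to do what the reviewer suggests is to chunk incoming callback buffers into fixed-size frames and process them on the spot, rather than polling from a worker thread. This is a self-contained sketch in plain C# (no Unity/SDK types); `ProcessReverseStream` here is a placeholder for the real APM call, and the frame size assumes 10 ms at 48 kHz mono:

```csharp
using System.Collections.Generic;

// Sketch: accumulate whatever buffer size the audio callback hands us and
// emit complete fixed-duration frames immediately, with no Thread.Sleep loop.
public class FrameChunker
{
    const int SamplesPerFrame = 480; // 10 ms at 48 kHz mono (assumed)
    readonly List<float> _pending = new List<float>();
    public int FramesProcessed { get; private set; }

    // Called from the audio callback with an arbitrary-length buffer.
    public void OnAudioRead(float[] data)
    {
        _pending.AddRange(data);
        while (_pending.Count >= SamplesPerFrame)
        {
            float[] frame = _pending.GetRange(0, SamplesPerFrame).ToArray();
            _pending.RemoveRange(0, SamplesPerFrame);
            ProcessReverseStream(frame); // placeholder for the real APM call
        }
    }

    void ProcessReverseStream(float[] frame) => FramesProcessed++;
}
```

Because the chunker is driven by the callback itself, frames are handed off as soon as enough samples exist, so no sleep-interval drift can accumulate.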
Runtime/Scripts/ApmReverseStream.cs
Outdated
private void OnAudioRead(float[] data, int channels, int sampleRate)
{
    _captureBuffer.Write(data, (uint)channels, (uint)sampleRate);
We could directly use ProcessReverseStream here?
Runtime/Scripts/RtcAudioSource.cs
Outdated
@@ -101,78 +103,67 @@ private void Update()
    while (true)
    {
        Thread.Sleep(Constants.TASK_DELAY);
We will also get a skew here: as soon as we're a bit late, we're going to hear bad-quality input (jittery audio).
It's OK to push faster than realtime; the Rust SDKs will handle it in a high-precision queue.
I see the TASK_DELAY is 5 ms.
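To make the skew concern concrete, here is a small arithmetic sketch (plain C#, illustrative numbers only, not from the PR): if each loop iteration sleeps 5 ms but also spends some time doing work, the loop period exceeds 5 ms and the capture falls progressively behind a 5 ms schedule.

```csharp
// Sketch: accumulated lag of a fixed-sleep polling loop. Each iteration
// takes (sleep + work) wall time but is expected every `sleep` ms, so the
// lag grows linearly with the number of iterations.
public static class DriftDemo
{
    public static double AccumulatedLagMs(double sleepMs, double workMs, int iterations)
    {
        double actual = iterations * (sleepMs + workMs);
        double ideal = iterations * sleepMs;
        return actual - ideal;
    }
}
```

With just 1 ms of work per 5 ms tick, the loop is a full second behind after 1000 iterations, which matches the reviewer's point that pushing frames as they arrive (faster than realtime if needed) is safer than pacing with Thread.Sleep.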
Hi @theomonnom, thank you for your feedback. Yes, there does appear to be a skew for longer room durations. I've moved the calls to the APM methods directly into the audio filter callbacks; however, this seems to introduce some audio artifacts that I haven't been able to explain yet. I think the issue is related to the forward stream being processed before the reverse stream, but I need to do some more investigation to confirm.
I think it's most likely because this function is too slow:
You could also try to increase Unity's default DSP buffer size.
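For reference, Unity exposes the DSP buffer size through `AudioSettings.GetConfiguration()` / `AudioSettings.Reset()`; a larger buffer gives the audio filter callbacks more headroom per invocation at the cost of added latency. These are real Unity APIs, but the chosen size (1024) is just an example, and `AudioSettings.Reset` restarts the audio system, so it is best done once at startup:

```csharp
using UnityEngine;

// Sketch: raise the DSP buffer size at startup. The platform default is
// typically 256-1024 samples depending on the target.
public class DspBufferConfig : MonoBehaviour
{
    void Awake()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();
        config.dspBufferSize = 1024; // example value, not a recommendation
        AudioSettings.Reset(config); // note: this restarts the audio system
    }
}
```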
private void OnAudioRead(float[] data, int channels, int sampleRate)
{
    _captureBuffer.Write(data, (uint)channels, (uint)sampleRate);
    while (true)
    {
        using var frame = _captureBuffer.ReadDuration(AudioProcessingModule.FRAME_DURATION_MS);
        if (frame == null) break;

        _apm.ProcessReverseStream(frame);
    }
}
Maybe this one too?
This PR adds support for the WebRTC audio processing module and enables AEC for microphone tracks.