Would love to understand how to build audio applications without requiring SharedArrayBuffer #22381
patrick99e99 asked this question in Q&A (unanswered)
I have been working on a project recently that involves the Web Audio API, in particular interacting with emulators of various sound chips.
I followed this guide:
https://emscripten.org/docs/api_reference/wasm_audio_worklets.html?highlight=audio%20worklet
I ended up making multiple calls to `emscripten_create_wasm_audio_worklet_processor_async` so that I have a worklet processor running for each sound chip.
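For context, here is roughly what that setup looks like (a minimal sketch following the Emscripten guide linked above; the chip names, the stack size, and the two-chip loop are just illustrative):

```c
#include <emscripten/webaudio.h>
#include <stdbool.h>
#include <stdint.h>

// One AudioWorkletProcessor is registered per chip; names are illustrative.
static const char *kChipNames[] = { "chip-a", "chip-b" };

static uint8_t audioThreadStack[4096];

static void ProcessorCreated(EMSCRIPTEN_WEBAUDIO_T audioContext, bool success,
                             void *userData) {
  if (!success) return;
  // ...create the matching node here with
  // emscripten_create_wasm_audio_worklet_node() and connect it...
}

static void AudioThreadInitialized(EMSCRIPTEN_WEBAUDIO_T audioContext,
                                   bool success, void *userData) {
  if (!success) return;
  // Register one worklet processor per sound chip.
  for (int i = 0; i < 2; ++i) {
    WebAudioWorkletProcessorCreateOptions opts = { .name = kChipNames[i] };
    emscripten_create_wasm_audio_worklet_processor_async(
        audioContext, &opts, &ProcessorCreated, (void *)kChipNames[i]);
  }
}

int main() {
  EMSCRIPTEN_WEBAUDIO_T context = emscripten_create_audio_context(NULL);
  emscripten_start_wasm_audio_worklet_thread_async(
      context, audioThreadStack, sizeof(audioThreadStack),
      &AudioThreadInitialized, NULL);
}
```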
What I am trying to do is make it so that my JS code can trigger a sound chip, tell it to play, and then have the worklet call `emscripten_audio_worklet_post_function_v` to tell the main thread when it has finished playing. This was all fine and dandy until I realized that compiling with `-sAUDIO_WORKLET=1 -sWASM_WORKERS=1` expects a SharedArrayBuffer to be available... Ultimately this project is going to be hosted on multiple platforms, on servers that I do not control, and requiring them to make SharedArrayBuffer available is problematic.
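The worklet-side notification I described looks roughly like this (a sketch only; `render_chip_samples` and `OnChipFinished` are hypothetical stand-ins for my chip emulator and my main-thread handler):

```c
#include <emscripten/webaudio.h>
#include <stdbool.h>

// Hypothetical: renders one audio quantum for a chip and returns
// whether the chip is still playing.
extern bool render_chip_samples(AudioSampleFrame *outputs, void *chip);

// Runs on the main browser thread; from here it is safe to notify
// the front-end JS (e.g. via EM_ASM or an exported callback).
static void OnChipFinished(void) {
  // let the front end know this chip finished playing
}

static bool ProcessChip(int numInputs, const AudioSampleFrame *inputs,
                        int numOutputs, AudioSampleFrame *outputs,
                        int numParams, const AudioParamFrame *params,
                        void *userData) {
  bool stillPlaying = render_chip_samples(outputs, userData);
  if (!stillPlaying) {
    // Post a call from the audio worklet thread back to the main thread.
    emscripten_audio_worklet_post_function_v(EMSCRIPTEN_AUDIO_MAIN_THREAD,
                                             OnChipFinished);
  }
  return true;  // keep the node alive
}
```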
So I looked at projects like snes9x (the Super Nintendo emulator), which has been ported to the browser using Emscripten and generates real-time audio no problem without requiring a crossOriginIsolated environment... That makes me think that I am going about this all wrong, and that there should be a way for me to do what I want without needing SharedArrayBuffer.
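For illustration, the kind of SharedArrayBuffer-free setup I imagine is something like the following (a sketch only: it assumes a hand-written chip-processor.js that instantiates its own copy of the Wasm module inside the AudioWorkletGlobalScope and reports completion over its MessagePort; the file name, the processor name, and the exported `_on_chip_finished` function are all hypothetical):

```c
#include <emscripten/em_js.h>

// A plain AudioWorkletNode driven from C via EM_JS. No Wasm Workers and no
// SharedArrayBuffer: the worklet runs its own module instance and talks
// back over its MessagePort.
EM_JS(void, start_chip_node, (), {
  const ctx = new AudioContext();
  ctx.audioWorklet.addModule('chip-processor.js').then(() => {
    const node = new AudioWorkletNode(ctx, 'chip-processor');
    node.port.onmessage = (e) => {
      if (e.data === 'finished') {
        Module._on_chip_finished();  // hypothetical exported C function
      }
    };
    node.connect(ctx.destination);
    node.port.postMessage('play');  // tell the chip to start
  });
});
```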
I am hoping someone can shed some light on this and point me in the right direction: how can I generate real-time audio, and let my front-end JS code know when it's done, without requiring SharedArrayBuffer?