Currently, the interface exposed in SpeechToTextV1/SpeechToText+Recognize.swift only keeps a SpeechToTextSession alive for as long as it takes to transcribe a single Data blob.
We should add support for sending smaller chunks of audio in real time as part of one session, so that streaming audio applications that are not driven by the microphone can also use the service.
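A rough sketch of what such an interface might look like. This is purely illustrative: the protocol, its method names, and the callback shape below are hypothetical and do not exist in the SDK today; they are meant only to show the lifecycle of a long-lived session fed by arbitrary chunk sources.

```swift
import Foundation

// Hypothetical interface for streaming audio in chunks over a single,
// long-lived recognition session. None of these names are part of the SDK.
protocol StreamingRecognizeSession {
    /// Open the underlying connection and begin a recognition request.
    func start()

    /// Send one chunk of audio. May be called repeatedly while the session
    /// is open, e.g. with data arriving from a file or a network stream
    /// rather than the microphone.
    func send(chunk: Data)

    /// Signal that no more audio will be sent; the session then delivers
    /// any remaining final results before closing.
    func finish()

    /// Invoked whenever an interim or final transcription arrives.
    var onTranscription: ((String) -> Void)? { get set }
}

// Example usage: forward chunks from some external source as they arrive.
func pipe(chunks: [Data], into session: StreamingRecognizeSession) {
    session.onTranscription = { text in
        print("Transcript so far: \(text)")
    }
    session.start()
    for chunk in chunks {
        session.send(chunk: chunk)
    }
    session.finish()
}
```

With this shape, the microphone-driven path becomes just one producer of chunks among many, and the session's lifetime is controlled by the caller rather than by the size of a single Data blob.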