Releases: SiliconLabs/mltk
0.10.0
Bug Fixes/Improvements
- Set tflite-support<0.4.2 in setup.py. This works around a dependency issue where tflite-support>=0.4.2 requires flatbuffers>=2.0 while TensorFlow requires flatbuffers<2.0
- Fixed a bug in the AudioFeatureGenerator Python wrapper that caused dynamic quantization during training to work incorrectly
- Re-trained keyword_spotting_on_off_v2 using fixed AudioFeatureGenerator Python wrapper
- After training completes, do not automatically run evaluation if the model does not inherit EvaluateMixin
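For illustration, the tflite-support pin described above might look like this in a setup.py dependency list (a sketch; the surrounding entries are hypothetical, only the pin itself comes from the release note):

```python
# Hypothetical setup.py excerpt illustrating the dependency pin.
# tflite-support>=0.4.2 pulls in flatbuffers>=2.0, which conflicts with the
# flatbuffers<2.0 constraint TensorFlow carried at the time, so the pin
# keeps tflite-support below 0.4.2.
install_requires = [
    "tflite-support<0.4.2",
    "tensorflow",  # hypothetical neighboring dependency
]
```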
0.9.0
New Features/Improvements
- Added support for training on a remote SSH server: mltk ssh train
- Added tutorial for model training in the "cloud": Cloud Training Tutorial
- Updated ParallelAudioDataGenerator and ParallelImageDataGenerator to use joblib
0.8.0
New Features/Improvements
- Added Pac-Man demo: https://mltk-pacman.web.app
- Added Pac-Man tutorial
- Added BLE Audio Classifier C++ example application
- Added keyword_spotting_pacman reference model
- Added CLI option to disable GPU
- Optimized MVP-accelerated Conv2D kernels. Improved latency by 2x at the expense of additional RAM
- Added Supported Hardware documentation page
Bug Fixes
- Fixed issue with reporting required tensor memory during profiling
- Fixed issue with reporting unsupported layers when profiling on device
- Fixed issue with recording audio when using activity detection block
- Fixed issue with building MVP Python wrapper on Windows
0.7.0
General Updates
- Updated to Gecko SDK 4.1.0
- Updated to TensorFlow-Lite Micro (June 8th, 2022)
- Updated to support TensorFlow 2.9
New Tutorials
See all tutorials here
New C++ Examples
- fingerprint_authenticator
- image_classifier
- Updated audio_classifier to support new Audio Feature Generator settings
See all C++ examples here
New Reference Models
See all reference models here
Other Changes
- Added new settings to the Audio Feature Generator:
- fe.activity_detection_enable - Enable the activity detection block. This indicates when activity, such as a speech command, is detected in the audio stream
- fe.activity_detection_alpha_a - The activity detection “fast filter” coefficient. The filter is a one-real-pole IIR filter that computes: out = (1-k)*in + k*out
- fe.activity_detection_alpha_b - The activity detection “slow filter” coefficient. The filter is a one-real-pole IIR filter that computes: out = (1-k)*in + k*out
- fe.activity_detection_arm_threshold - The threshold above which possible activity is considered present in the audio stream
- fe.activity_detection_trip_threshold - The threshold for when activity is considered detected in the audio stream
- fe.dc_notch_filter_enable - Enable the DC notch filter. This will help negate any DC components in the audio signal
- fe.dc_notch_filter_coefficient - The DC notch filter coefficient k in Q(16,15) format: H(z) = (1 - z^-1)/(1 - k*z^-1)
- fe.quantize_dynamic_scale_enable - Enable dynamic quantization of the generated audio spectrogram. With this, the maximum spectrogram value is mapped to +127, while the maximum minus fe.quantize_dynamic_scale_range_db, and anything below it, is mapped to -128
- fe.quantize_dynamic_scale_range_db - The dynamic range in dB used by the dynamic quantization
- samplewise_norm.rescale - Value by which to scale each element of the sample: norm_sample = sample * rescale. The model input dtype should be float32
- samplewise_norm.mean_and_std - Normalize the sample by its mean and standard deviation: norm_sample = (sample - mean(sample)) / std(sample). The model input dtype should be float32
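The activity detection settings above can be sketched in Python. This is an illustrative simplification, not the MLTK implementation: the filter follows the documented one-pole form out = (1-k)*in + k*out, while the function names and the way the fast and slow filter outputs combine into an arm/trip decision are assumptions.

```python
def one_pole_iir(samples, k):
    """One-real-pole IIR low-pass: out = (1 - k) * in + k * out."""
    out = 0.0
    history = []
    for x in samples:
        out = (1.0 - k) * x + k * out
        history.append(out)
    return history

def detect_activity(energy, alpha_a, alpha_b, arm_threshold, trip_threshold):
    """Illustrative detector: the fast filter (alpha_a) tracks activity
    quickly while the slow filter (alpha_b) tracks the background level.
    Crossing arm_threshold arms the detector; crossing trip_threshold
    while armed reports a detection."""
    fast = one_pole_iir(energy, alpha_a)
    slow = one_pole_iir(energy, alpha_b)
    armed = detected = False
    for f, s in zip(fast, slow):
        score = f - s  # assumed combination of the two filter outputs
        if score > arm_threshold:
            armed = True
        if armed and score > trip_threshold:
            detected = True
    return detected
```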
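The DC notch transfer function H(z) = (1 - z^-1)/(1 - k*z^-1) corresponds to the difference equation y[n] = x[n] - x[n-1] + k*y[n-1]. A minimal sketch (illustrative only, not the MLTK implementation; the function name is an assumption):

```python
def dc_notch_filter(samples, k=0.95):
    """Apply y[n] = x[n] - x[n-1] + k * y[n-1], which suppresses the DC
    component of the signal while passing higher frequencies."""
    x_prev = 0.0
    y_prev = 0.0
    out = []
    for x in samples:
        y = x - x_prev + k * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out
```

A constant (pure DC) input decays toward zero at the output, which is the intended effect of the notch.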
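The dynamic quantization mapping described above can be sketched as follows. The function name and the linear interpolation between the two endpoints are assumptions; only the endpoint behavior (max -> +127, max minus range_db and below -> -128) comes from the setting descriptions.

```python
def quantize_dynamic_scale(spectrogram, range_db):
    """Map the max spectrogram value to +127 and anything at or below
    (max - range_db) to -128, with values in between scaled linearly
    (assumed) onto the int8 range."""
    maxval = max(spectrogram)
    floor = maxval - range_db
    out = []
    for v in spectrogram:
        if v <= floor:
            out.append(-128)
        else:
            # linear map of (floor, maxval] onto (-128, +127]
            out.append(int(round((v - floor) / range_db * 255.0)) - 128)
    return out
```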