This demo application belongs to the set of examples for LightningChart JS, a data visualization library for JavaScript.
LightningChart JS is an entirely GPU-accelerated and performance-optimized charting library for presenting massive amounts of data. It offers an easy way of creating sophisticated and interactive charts and adding them to your website or web application.
The demo can be used as an example or a seed project. Local execution requires the following steps:
- Make sure that a relevant version of [Node.js](https://nodejs.org/en/download/) is installed
- Open the project folder in a terminal:
          npm install              # fetches dependencies
          npm start                # builds an application and starts the development server
- The application is available at *http://localhost:8080* in your browser; webpack-dev-server provides hot reload functionality.
## Description
This example shows a simple use case for heatmaps: a spectrogram.
A spectrogram is a visual representation of the spectrum of frequencies in a signal. Spectrograms can be used to visualize any waveform, but they are most often used to display audio signals.
This example loads an audio file and shows a spectrogram for each channel of the audio file.
The spectrogram shows frequency on one axis (Y Axis) and time on the other (X Axis). The color of the heatmap at any point represents the amplitude of that frequency at the specified time.
## Getting the data
First, the audio file to be shown is loaded with [fetch][fetch] and converted into an ArrayBuffer.
```js
const response = await fetch('url')
const buffer = await response.arrayBuffer()
```
This example uses the [Web Audio APIs][web-audio-api] to retrieve the frequency data to display in the heatmap. These APIs make it easy to work with and manipulate audio files. For spectrogram use, the [AnalyserNode][analyzer-node] is the most useful part of the API, as it provides the [getByteFrequencyData][getByteFrequencyData] method, which is an implementation of the [Fast Fourier Transform][fft].
The AudioContext provides a method to convert an ArrayBuffer into an [AudioBuffer][AudioBuffer]. Now that the audio file has been converted into an AudioBuffer, it's possible to start extracting data from it.
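
A minimal sketch of this decoding step could look like the following (the variable names are illustrative and not taken from the example source):

```js
// Decode the fetched ArrayBuffer into an AudioBuffer.
// `buffer` is assumed to be the ArrayBuffer from the fetch snippet above.
const audioContext = new AudioContext()
const audioBuffer = await audioContext.decodeAudioData(buffer)
console.log(audioBuffer.numberOfChannels, audioBuffer.sampleRate, audioBuffer.duration)
```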
To process the full audio buffer as fast as possible, an [OfflineAudioContext][OfflineAudioContext] is used. The OfflineAudioContext doesn't output the data to an audio device; instead, it goes through the audio as fast as possible and outputs an AudioBuffer with the processed data. In this example the processed audio buffer itself is not used, but the processing pass is used to calculate the FFT data needed to display the intensity of each frequency in the spectrogram. The audio buffer created earlier is used as a [buffer source][createBufferSource] for the OfflineAudioContext.
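
As a rough sketch, the offline context and its buffer source could be set up like this (the constructor arguments simply mirror the decoded AudioBuffer; the values used by the example itself may differ):

```js
// Create an offline context that matches the decoded audio, then feed the
// AudioBuffer into it through a buffer source node.
const offlineCtx = new OfflineAudioContext(
    audioBuffer.numberOfChannels,
    audioBuffer.length,
    audioBuffer.sampleRate,
)
const source = offlineCtx.createBufferSource()
source.buffer = audioBuffer
```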
The buffer source only has a single output, but each channel should be processed separately. To do this, a [ChannelSplitter][createChannelSplitter] is used with its output count matching the source channel count. This makes it possible to create an AnalyserNode for each channel and pipe only a single channel to each analyser.
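
For illustration, the splitter and the per-channel analysers might be wired up roughly like this (the FFT size is an assumption, not necessarily the value the example uses):

```js
// Split the single source output into one output per channel and attach an
// AnalyserNode to each of them.
const splitter = offlineCtx.createChannelSplitter(audioBuffer.numberOfChannels)
source.connect(splitter)

const analysers = []
for (let ch = 0; ch < audioBuffer.numberOfChannels; ch += 1) {
    const analyser = offlineCtx.createAnalyser()
    analyser.fftSize = 2048 // assumed FFT size
    splitter.connect(analyser, ch) // pipe only channel `ch` into this analyser
    analysers.push(analyser)
}
```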
A [ScriptProcessorNode][createScriptProcessor] is used to go through the audio buffer in chunks. For each chunk, the FFT data is calculated for each channel and stored in buffers large enough to fit the full data.
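
A hedged sketch of that chunked processing could look like this (the buffer size and the storage layout are assumptions made for illustration):

```js
// Step through the audio in fixed-size chunks; on every chunk, read the FFT
// data from each channel's analyser and append it to a flat per-channel buffer.
const bufferSize = 2048
const processor = offlineCtx.createScriptProcessor(
    bufferSize,
    audioBuffer.numberOfChannels,
    audioBuffer.numberOfChannels,
)
const chunkCount = Math.ceil(audioBuffer.length / bufferSize)
const fftData = analysers.map((a) => new Uint8Array(chunkCount * a.frequencyBinCount))
let chunk = 0
processor.onaudioprocess = () => {
    if (chunk >= chunkCount) return
    analysers.forEach((analyser, ch) => {
        const bins = new Uint8Array(analyser.frequencyBinCount)
        analyser.getByteFrequencyData(bins)
        fftData[ch].set(bins, chunk * analyser.frequencyBinCount)
    })
    chunk += 1
}
// The processor has to reach the destination for onaudioprocess to fire
// during offline rendering.
source.connect(processor)
processor.connect(offlineCtx.destination)
```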
Last, the [startRendering()][start-rendering] method is called to render out the audio buffer. This is when all of the FFT calculation is done.
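
In outline, and assuming the nodes from the sketches above:

```js
// Start the buffer source and run the offline rendering pass. The rendered
// AudioBuffer itself is not used; the useful side effect of rendering is that
// the per-channel FFT buffers get filled by the script processor callback.
source.start()
await offlineCtx.startRendering()
```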
## Showing the data
Each channel of the audio file is shown in its own chart inside a single dashboard. When the data has been calculated, a dashboard is made. This dashboard is then passed to functions that set up the charts inside the dashboard and create the heatmap series based on the script processor buffer size and FFT resolution.
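
A rough sketch of that setup, assuming a v4-style LightningChart JS API (the package name, method names, and option names may differ between library versions, and the chart titles here are illustrative only):

```js
import { lightningChart } from '@arction/lcjs'

// One dashboard row per audio channel.
const dashboard = lightningChart().Dashboard({
    numberOfColumns: 1,
    numberOfRows: fftData.length,
})
fftData.forEach((channelData, i) => {
    const chart = dashboard
        .createChartXY({ columnIndex: 0, rowIndex: i, columnSpan: 1, rowSpan: 1 })
        .setTitle(`Channel ${i + 1}`)
    // Heatmap grid sized by the number of chunks (time) and FFT bins (frequency).
    const heatmap = chart.addHeatmapGridSeries({
        columns: chunkCount,
        rows: analysers[i].frequencyBinCount,
    })
    // Intensity values are filled in once the flat FFT data has been remapped
    // into a two-dimensional matrix (see the remapping sketch below), e.g.
    // heatmap.invalidateIntensityValues(matrix)
})
```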
The data from the audio APIs is in the wrong format to be displayed in the heatmap as-is. The heatmap data has to be mapped from the one-dimensional array it was generated in into a two-dimensional array. In this example the mapping is done with the `remapDataToTwoDimensionalMatrix` function, which maps the data into columns. If the heatmap were displayed vertically, the mapping would be easier, as each stride of data could simply be placed as a row.
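
The example's own implementation is not reproduced here, but a hypothetical version of such a remapping (with an assumed signature: the flat FFT array plus the stride size, i.e. the number of frequency bins per time chunk) could look like this:

```js
// Map a flat array of FFT strides into a matrix[row][column] layout, where
// row = frequency bin and column = time chunk.
const remapDataToTwoDimensionalMatrix = (data, strideSize) => {
    const columnCount = Math.floor(data.length / strideSize)
    const matrix = Array.from({ length: strideSize }, () => new Array(columnCount))
    for (let col = 0; col < columnCount; col += 1) {
        for (let row = 0; row < strideSize; row += 1) {
            matrix[row][col] = data[col * strideSize + row]
        }
    }
    return matrix
}
```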