
02 MSSA


MSSA. Background.

Construction of a trajectory matrix

The first step in MSSA is the same as in SSA: one starts with the construction of the trajectory matrix. Although the construction procedure is quite similar, one should be aware of the size of the trajectory matrix. Let $s$ denote the number of given time series, $N$ the length of each of the $s$ time series, $L$ the window size with $2 \leq L \leq N/2$ and $L \in \mathbb{N}$, and $K=N-L+1$. The trajectory matrix for MSSA is then an $L \times Ks$ matrix $X$: we simply stack the trajectory matrices $X^i$ of the $s$ individual time series. The columns of the trajectory matrix $X^i$ of the $i$-th time series consist of lagged versions of that series, just as in SSA.

The matrices $X^i$ can be stacked vertically or horizontally; for performance reasons in the SVD step, horizontal stacking is often preferred.

Thus, each matrix $X^i$ for $i=0,1,2,\dots,s-1$ can be represented as follows:

$$X^i = \begin{bmatrix} f^i_0 & f^i_1 &f^i_2 & f^i_3 &\dots & f^i_{N-L}\\\\ f^i_1 & f^i_2 &f^i_3 & f^i_4 &\dots & f^i_{N-L+1}\\\\ f^i_2 & f^i_3 &f^i_4 & f^i_5 &\dots & f^i_{N-L+2}\\\\ \vdots & \vdots &\vdots & \vdots &\vdots & \vdots\\\\ f^i_{L-1}& f^i_{L} &f^i_{L+1} & f^i_{L+2} &\dots & f^i_{N-1} \end{bmatrix}$$

It is clear that the elements along each anti-diagonal are equal, i.e. $X^i$ is a Hankel matrix.

Finally, we obtain the resulting trajectory matrix $X$ as

$$X = \begin{bmatrix} X^0 & X^1 & X^2 & \dots & X^{s-1} \end{bmatrix}$$
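To make the construction concrete, here is a minimal NumPy sketch (the helper names `trajectory_matrix` and `mssa_trajectory_matrix` are ad hoc, not part of any package API): it builds the Hankel trajectory matrix of each series and stacks them horizontally.

```python
import numpy as np

def trajectory_matrix(f, L):
    """L x K Hankel trajectory matrix of a single series f, with K = N - L + 1."""
    N = len(f)
    K = N - L + 1
    return np.column_stack([f[j:j + L] for j in range(K)])

def mssa_trajectory_matrix(series, L):
    """Horizontally stack the per-series trajectory matrices into an L x (K*s) matrix."""
    return np.hstack([trajectory_matrix(f, L) for f in series])

# toy example: s = 2 series of length N = 12, window L = 4, so K = 9
rng = np.random.default_rng(0)
series = [rng.standard_normal(12), rng.standard_normal(12)]
X = mssa_trajectory_matrix(series, L=4)
print(X.shape)  # (4, 18) = (L, K * s)
```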

The second step is to decompose $X$. Several options exist; we focus on the SVD and the eigendecomposition.

  1. SVD Decomposition

This option is used primarily when the time series are not too long, because the SVD is a computationally intensive algorithm. This becomes critical especially for MSSA, where the trajectory matrix becomes extremely high-dimensional due to the stacking of the trajectory matrices of the individual time series. The SVD is defined as follows:

$$X=U \Sigma V^T$$

where:

  • $U$ is an $L \times L$ unitary matrix containing the orthonormal set of left singular vectors of $X$ as columns
  • $\Sigma$ is an $L \times Ks$ rectangular diagonal matrix containing the singular values of $X$ in descending order
  • $V$ is a $Ks \times Ks$ unitary matrix containing the orthonormal set of right singular vectors of $X$ as columns.

The SVD of the trajectory matrix can be also formulated as follows:

$$X = \sum^{d-1}_{i=0}\sigma_iU_iV_i^T = \sum^{d-1}_{i=0}{X_i}$$

where:

  • $(\sigma_i, U_i, V_i)$ is the $i^{th}$ eigentriple of the SVD

  • $\sigma_i$ is the $i^{th}$ singular value, a scaling factor that determines the relative importance of the eigentriple

  • $U_i$ is the $i^{th}$ column of $U$; together these columns span the column space of $X$

  • $V_i$ is the $i^{th}$ column of $V$; together these columns span the row space of $X$

  • $d$, such that $d \leq L$, is the rank of the trajectory matrix $X$. It can be regarded as the intrinsic dimensionality of the time series' trajectory space

  • $X_i=\sigma_iU_iV_i^T$ is the $i^{th}$ elementary matrix
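As a minimal sketch of this decomposition, assuming `X` is the stacked trajectory matrix from the sketch above, NumPy's SVD yields the eigentriples, and summing the elementary matrices recovers $X$:

```python
import numpy as np

# X: the L x (K*s) trajectory matrix built in the earlier sketch
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD of X
d = int(np.sum(sigma > 1e-10 * sigma[0]))               # numerical rank of X

# elementary matrices X_i = sigma_i * U_i * V_i^T; their sum reconstructs X
X_elem = [sigma[i] * np.outer(U[:, i], Vt[i]) for i in range(d)]
assert np.allclose(np.sum(X_elem, axis=0), X)
```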

  2. Randomized SVD

    This technique is used to approximate the classic SVD and is primarily applied in SSA with large $N$ and in MSSA, where $X$ is high-dimensional because of the stacking of the trajectory matrices of all considered time series. The main purpose of this technique is to reduce the dimensionality of $U$ and $V$ by computing only the leading eigentriples.
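A possible sketch of this step uses scikit-learn's `randomized_svd`; the library choice and the truncation rank `r` are assumptions for illustration, not necessarily what is used here.

```python
from sklearn.utils.extmath import randomized_svd

# keep only the r leading eigentriples instead of computing the full SVD;
# r is a hypothetical truncation rank chosen by the user
r = 2
U_r, sigma_r, Vt_r = randomized_svd(X, n_components=r, n_iter=5, random_state=0)
# U_r: L x r, sigma_r: (r,), Vt_r: r x (K*s) -- a rank-r approximation of X
```

Because only the $r$ leading eigentriples are returned, the subsequent grouping and reconstruction steps operate on much smaller factors than with the full SVD.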

Eigentriple grouping and elementary matrix separation

Time Series Reconstruction

Forecasting
