mbpls.mbpls module

Module contents

class mbpls.mbpls.MBPLS(n_components=2, full_svd=False, method='NIPALS', standardize=True, max_tol=1e-14, calc_all=True, sparse_data=False)
  • PLS1: Predict a response vector \(y\) from a single multivariate data block \(X\)

  • PLS2: Predict a response matrix \(Y\) from a single multivariate data block \(X\)

  • MBPLS: Predict a response vector/matrix \(Y\) from multiple data blocks \(X_1, X_2, ... , X_i\)

For detailed information, see [ref].

method : string (default 'NIPALS')

The method used to derive the model attributes. Possible values are 'UNIPALS', 'NIPALS', 'SIMPLS' and 'KERNEL'.

n_components : int

Number (\(k\)) of Latent Variables (LV)

standardize : bool (default True)

Standardizing the data

full_svd : bool (default False)

Use full singular value decomposition when performing the SVD method. Set to 'False' when using very large square matrices \(X\).

max_tol : non-negative float (default 1e-14)

Maximum tolerance allowed when using the iterative NIPALS algorithm

calc_all : bool (default True)

Calculate all internal attributes for the used method. Some methods do not need to calculate all attributes, e.g. scores, weights, etc., to obtain the regression coefficients used for prediction. Setting this parameter to False will omit these calculations for efficiency and speed.

sparse_data : bool (default False)

NIPALS is the only algorithm that can handle sparse data, using the method of H. Martens and Martens (2001) (p. 381). If this parameter is set to 'True', the method is forced to NIPALS and sparse data is accepted; otherwise sparse data will be rejected.
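
A minimal construction sketch combining the parameters above; the data shapes are arbitrary, and encoding missing entries as np.nan is an assumption about the sparse data handling, not confirmed by this reference:

import numpy as np
from mbpls.mbpls import MBPLS

# default model: NIPALS with standardization
model = MBPLS(n_components=3)

# KERNEL variant that skips the optional attribute calculations,
# e.g. when only the regression coefficients are needed
fast_model = MBPLS(n_components=3, method='KERNEL', calc_all=False)

# sparse (missing) data forces the method to NIPALS
x = np.random.rand(20, 100)
x[5, 10] = np.nan  # assumed encoding of a missing value
sparse_model = MBPLS(n_components=3, sparse_data=True)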

Attributes

X-side

Ts_ : array, super scores \([n,k]\)

T_ : list, block scores \([i][n,k]\)

W_ : list, block weights \([i][p_i,k]\)

A_ : array, block importances/super weights \([i,k]\)

A_corrected_ : array, normalized block importances \(A_{corr,ik} = A_{ik} \cdot (1- \frac{p_i}{p})\)

P_ : list, block loadings \([i][p_i,k]\)

R_ : array, x_rotations \(R = W (P^T W)^{-1}\)

explained_var_x_ : list, explained variance in \(X\) per LV \([k]\)

explained_var_xblocks_ : array, explained variance in each block \(X_i\) \([i,k]\)

beta_ : array, regression vector \(\beta\) \([p,q]\)

Y-side

U_ : array, scores \([n,k]\)

V_ : array, loadings \([q,k]\)

explained_var_y_ : list, explained variance in \(Y\) \([k]\)
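
A short sketch of inspecting these attributes after fitting; it assumes, in line with the scikit-learn convention, that fit returns the instance:

import numpy as np
from mbpls.mbpls import MBPLS

x1, x2 = np.random.rand(20, 30), np.random.rand(20, 45)  # i = 2 blocks
y = np.random.rand(20, 1)

model = MBPLS(n_components=2).fit([x1, x2], y)

print(model.Ts_.shape)    # super scores: (20, 2) = [n, k]
print(model.T_[0].shape)  # block scores of X_1: (20, 2) = [n, k]
print(model.W_[0].shape)  # block weights of X_1: (30, 2) = [p_1, k]
print(model.A_.shape)     # block importances: (2, 2) = [i, k]
print(model.beta_.shape)  # regression coefficients: (75, 1) = [p, q]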

Notes

According to the literature, one distinguishes between PLS1 [ref], PLS2 [ref] and MBPLS [ref]. The common goal is to find loading vectors \(p\) and \(v\) that project the data to latent variable scores \(t_s\) and \(u\) indicating maximal covariance. Subsequently, the explained variance is deflated and further LVs can be extracted. Deflation for the \(k\)-th LV is obtained as:

\[X_{k+1} = X_{k} - t_k p_k^T\]

PLS1: Matrices are computed such that:

\[ \begin{align}\begin{aligned}X &= T_s P^T + E_X\\y &= X \beta + e\end{aligned}\end{align} \]

PLS2: Matrices are computed such that:

\[ \begin{align}\begin{aligned}X &= T_s P^T + E_X\\Y &= U V^T + E_Y\\Y &= X \beta + E\end{aligned}\end{align} \]

MBPLS: In addition, MBPLS provides a measure (\(a_{ik}\)) of how important each block \(X_i\) is for the prediction of \(Y\) in the \(k\)-th LV. Matrices are computed such that:

\[ \begin{align}\begin{aligned}X &= [X_1|X_2|...|X_i]\\X_i &= T_s P_i ^T + E_i\\Y &= U V^T + E_Y\\Y &= X \beta + E\end{aligned}\end{align} \]

using the following calculation:

\(X_1 = X\)

for \(k = 1, \dots, K\):

\[ \begin{align}\begin{aligned}w_{k} &= \text{first eigenvector of } X_k^T Y Y^T X_k, \quad ||w_k||_2 = 1\\w_{k} &= [w_{1k}|w_{2k}|...|w_{ik}]\\a_{ik} &= ||w_{ik}||_2 ^2\\t_{ik} &= \frac{X_i w_{ik}}{||w_{ik}||_2}\\t_{sk} &= \sum_i a_{ik} \, t_{ik}\\v_k &= \frac{Y^T t_{sk}}{t_{sk} ^T t_{sk}}\\u_k &= Y v_k\\u_k &= \frac{u_k}{||u_k||_2}\\p_k &= \frac{X_k^T t_{sk}}{t_{sk} ^T t_{sk}}, \quad p_k = [p_{1k}|p_{2k}|...|p_{ik}]\\X_{k+1} &= X_k - t_{sk} p_k^T\end{aligned}\end{align} \]

End loop

\(P = [p_{1}|p_{2}|...|p_{K}]\)

\(T_{s} = [t_{s1}|t_{s2}|...|t_{sK}]\)

\(U = [u_{1}|u_{2}|...|u_{K}]\)

\(V = [v_{1}|v_{2}|...|v_{K}]\)

\(W = [w_{1}|w_{2}|...|w_{K}]\)

\(R = W (P^T W)^{-1}\)

\(\beta = R V^T\)
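
The loop above can be sketched in NumPy for a single latent variable. This is an illustration of the listed equations (the helper mbpls_step is hypothetical), not the package's internal implementation:

import numpy as np

def mbpls_step(X_blocks, Y):
    # concatenated data matrix X = [X_1|X_2|...|X_i]
    X = np.hstack(X_blocks)
    # w_k: first eigenvector of X_k^T Y Y^T X_k, i.e. the first right
    # singular vector of Y^T X_k (rows of vt already have unit length)
    _, _, vt = np.linalg.svd(Y.T @ X, full_matrices=False)
    w = vt[0]
    # split w into block weights w_ik; block importances a_ik = ||w_ik||^2
    splits = np.cumsum([b.shape[1] for b in X_blocks])[:-1]
    w_blocks = np.split(w, splits)
    a = [np.linalg.norm(wi) ** 2 for wi in w_blocks]
    # block scores t_ik and super score t_sk = sum_i a_ik * t_ik
    t_blocks = [Xi @ wi / np.linalg.norm(wi)
                for Xi, wi in zip(X_blocks, w_blocks)]
    t_s = sum(ai * ti for ai, ti in zip(a, t_blocks))
    # Y loadings v_k and normalized Y scores u_k
    v = Y.T @ t_s / (t_s @ t_s)
    u = Y @ v
    u = u / np.linalg.norm(u)
    # X loadings p_k and deflation X_{k+1} = X_k - t_sk p_k^T
    p = X.T @ t_s / (t_s @ t_s)
    X_deflated = X - np.outer(t_s, p)
    return w, a, t_s, u, v, p, X_deflated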

Examples

Quick Start: Two random data blocks \(X_1\) and \(X_2\) and a random reference vector \(y\) for predictive modeling.

import numpy as np
from mbpls.mbpls import MBPLS

mbpls = MBPLS(n_components=4)

# two random data blocks sharing 20 observations
x1 = np.random.rand(20, 300)
x2 = np.random.rand(20, 450)

# random reference vector
y = np.random.rand(20, 1)

mbpls.fit([x1, x2], y)
mbpls.plot(num_components=4)

y_pred = mbpls.predict([x1, x2])
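
The quality of the fit can then be assessed, e.g. with the r2_score method listed below:

r2 = mbpls.r2_score([x1, x2], y)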

More elaborate examples can be found at https://github.com/DTUComputeStatisticsAndDataAnalysis/MBPLS/tree/master/examples

check_sparsity_level(data)
explained_variance_score(X, Y)
fit(X, Y)

Fit model to given data

Parameters
  • X (list) – of all xblocks x1, x2, …, xn. Rows are observations, columns are features/variables

  • Y (array) – 1-dim or 2-dim array of reference values

fit_predict(X, Y, **fit_params)

Fit to data, then predict it.

fit_transform(X, y=None, **fit_params)

Fit the model, then transform the given data to lower dimensions.

plot(num_components=2)

Plots the fitted values of the instance.

Parameters
  • num_components (int or list) – If int, the number of components to plot, starting with the first component. If list, the indices or range of the components to plot.

predict(X)

Predict y based on the fitted model

Parameters

X (list) – of all xblocks x1, x2, …, xn. Rows are observations, columns are features/variables

Returns

  • y_hat (np.array) – Predictions made based on the trained model and the supplied X

r2_score(X, Y)
transform(X, Y=None, return_block_scores=False)

Obtain scores based on the fitted model

Parameters

  • X (list) – of arrays containing all xblocks x1, x2, …, xn. Rows are observations, columns are features/variables

  • Y (array, optional) – 1-dim or 2-dim array of reference values

  • return_block_scores (bool, default False) – Return the block scores T_ when transforming the data

Returns

  • Super_scores (np.array)

  • Block_scores (list) – List of np.arrays containing the block scores, returned if return_block_scores is True

  • Y_scores (np.array, optional) – Y-scores, returned if Y was given
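
For example, assuming a model fitted as in the Quick Start above, and that the returned values follow the order of the list above:

super_scores = mbpls.transform([x1, x2])

super_scores, block_scores = mbpls.transform([x1, x2], return_block_scores=True)

super_scores, y_scores = mbpls.transform([x1, x2], y)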