1. Introduction

There are many R packages for tensors. For example, “rTensor”, “tensorA”, “tensorr”, and “nnTensor” can all perform tensor decompositions. Here we focus on common tensor operations and decompositions.

Among those packages, “rTensor” is the most complete and convenient package for our needs. So in this report we mainly introduce “rTensor” (Section 2), followed by the “tensor” package (Section 3). For reference, the tensor-related packages on CRAN are summarized below:

| Package Name | Description | Published Time | Author |
|---|---|---|---|
| rTensor | Tools for Tensor Analysis and Decomposition | 2018-12-04 | James Li, Jacob Bien, Martin Wells |
| tensor | Tensor product of arrays | 2012-05-05 | Jonathan Rougier |
| tensorA | Computations with tensors and datasets of vectors or matrices | 2018-07-29 | K. Gerald van den Boogaart |
| tensorr | Provides methods to manipulate and store sparse tensors | 2019-01-21 | Robert Zamora |
| nnTensor | Non-Negative Tensor Decomposition | 2019-10-25 | Koki Tsuyuzaki, Manabu Ishii, Itoshi Nikaido |
| catch | Covariate-Adjusted Tensor Classification in High-Dimensions | 2018-05-14 | Yuqing Pan, Qing Mai, Xin Zhang |
| BaTFLED3D | Bayesian Tensor Factorization Linked to External Data | 2017-10-06 | Nathan Lazar |
| MultiwayRegression | Perform Tensor-on-Tensor Regression | 2019-05-31 | Eric F. Lock |
| PTAk | Principal Tensor Analysis on k Modes | 2019-02-06 | Didier G. Leibovici |
| RDFTensor | Different Tensor Factorization (Decomposition) Techniques for RDF Tensors (Three-Mode-Tensors) | 2019-01-11 | Abdelmoneim Amer Desouki |
| TCA | Tensor Composition Analysis | 2019-05-22 | Elior Rahmani |
| tensorBF | Bayesian Tensor Factorization | 2018-10-02 | Suleiman A Khan |
| tensorsparse | Multiway Clustering via Tensor Block Models | 2019-08-05 | Miaoyan Wang, Yuchen Zeng |
| tensr | Covariance Inference and Decompositions for Tensor Datasets | 2018-08-15 | David Gerard |
| tensorBSS | Blind Source Separation Methods for Tensor-Valued Observations | 2019-03-21 | Joni Virta, Christoph Koesner, Bing Li, Klaus Nordhausen, Hannu Oja |
| Tlasso | Non-Convex Optimization and Statistical Inference for Sparse Tensor Graphical Models | 2016-09-19 | Will Wei Sun, Zhaoran Wang, Xiang Lyu, Han Liu, Guang Cheng |
| TRES | Tensor Regression with Envelope Structure and Three Generic Envelope Estimation Approaches | 2019-10-22 | Wenjing Wang, Jing Zeng, Xin Zhang |
| ttTensor | Tensor-Train Decomposition | 2019-03-06 | Koki Tsuyuzaki, Manabu Ishii, Itoshi Nikaido |

2. Tensor Application with “rTensor”

A very nice introduction to “rTensor” can be found in (Li et al. 2018), or in the reference manual for further details.

Here we introduce it in parallel with the order of the survey paper (Kolda and Bader 2009).

2.1. Create a Tensor

library("rTensor")
indices <- c(3, 4, 5)
arr <- array(rnorm(60), dim = indices)
tnsr <- as.tensor(arr)

Or equivalently,

tnsr <- new("Tensor", 3L, c(3L, 4L, 5L), data = rnorm(60))

Or equivalently,

tnsr <- rand_tensor(modes = c(3, 4, 5))

2.2. Norm and Inner Product

Frobenius norm and inner product

fnorm(tnsr)
## [1] 8.466838
innerProd(tnsr, tnsr)
## [1] 71.68735
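
The two are related by \(\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}\), which we can check numerically:

all.equal(fnorm(tnsr)^2, innerProd(tnsr, tnsr))  # should be TRUE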

2.3. Matricization, Unfoldings and Vectorization

As with matrices, we can slice a tensor by specifying indices:

 tnsr[, 1:2, 1]
## Numeric Tensor of 2 Modes
## Modes:  3 2 
## Data: 
##            [,1]         [,2]
## [1,]  0.1908841 -2.040112709
## [2,] -0.6332525 -0.979242215
## [3,]  2.3292573 -0.002704994

Unfolding a tensor: the general matrix unfolding maps a subset of the modes to the row indices and the remaining modes to the column indices.

 unfold(tnsr, row_idx = c(1, 2), col_idx = c(3))

We can fold this matrix back into a tensor:

unfolded = unfold(tnsr, row_idx = c(1, 2), col_idx = c(3))
folded_back <- fold(unfolded, row_idx = c(1, 2), col_idx = c(3),modes = c(3, 4, 5))
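
Folding should invert the unfolding; a quick check on the data slot of the two S4 objects:

identical(folded_back@data, tnsr@data)  # should be TRUE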

The k-mode unfolding matricizes the tensor along mode k.

k_unfold(tnsr, m = 3)

Again, we can fold this matrix back into a tensor:

unfolded = k_unfold(tnsr, m = 3)
folded_back <- k_fold(unfolded, m=3, modes = c(3, 4, 5))

Vectorization

vec(tnsr)

2.4. Tensor Multiplication

k-mode Product: \(\mathcal{Y} = \mathcal{X} \times_{k} M \Longleftrightarrow \mathcal{Y}_{(k)} = M \cdot \mathcal{X}_{(k)},\)

tnsr <- rand_tensor(modes = c(4, 6, 8, 10))
mat <- matrix(rnorm(12), ncol = 6)
ttm(tnsr = tnsr, mat = mat, m = 2)
## Numeric Tensor of 4 Modes
## Modes:  4 2 8 10 
## Data: 
## [1] -0.3147260  1.3522976  1.7483720  1.6232871 -1.4235339  0.3148969
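
We can verify the defining identity \(\mathcal{Y}_{(k)} = M \cdot \mathcal{X}_{(k)}\) numerically; a minimal sketch comparing the unfolded data (up to floating point):

Y <- ttm(tnsr = tnsr, mat = mat, m = 2)
all.equal(k_unfold(Y, m = 2)@data, mat %*% k_unfold(tnsr, m = 2)@data)
# should be TRUE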

k-mode Product with a “list” of “matrix”: \(\mathcal{Y} = \mathcal{X} \times_{1} M_{1} \times_{2} M_{2} \cdots \times_{K} M_{K} \Longleftrightarrow \mathcal{Y}_{(k)} = M_{k}\cdot \mathcal{X}_{(k)} \cdot (M_{K}\otimes \cdots \otimes M_{k+1} \otimes M_{k-1} \otimes \cdots \otimes M_1)^{T}\)

mat2 <- matrix(rnorm(24), ncol = 8)
ttl(tnsr = tnsr, list_mat = list(mat, mat2), ms = c(2, 3))
## Numeric Tensor of 4 Modes
## Modes:  4 2 3 10 
## Data: 
## [1]   8.1999537   0.2925717   3.0038572  -2.0363110 -11.0784747  -5.3933881

2.5. Kronecker, Khatri-Rao, and Hadamard Product

Kronecker

kronecker(mat, mat2)

Kronecker product of a “list” of “matrix”

kronecker_list(list(mat,mat,mat))

Khatri-Rao

khatri_rao(mat, mat)

Khatri-Rao product of a “list” of “matrix”

khatri_rao_list(list(mat,mat,mat))

Hadamard Product

hadamard_list(list(mat, mat))
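
The three products differ in how the dimensions combine; for our mat (2 x 6) and mat2 (3 x 8):

dim(kronecker(mat, mat2))           # 6 48: row and column counts multiply
dim(khatri_rao(mat, mat))           # 4 6: column-wise Kronecker; equal column counts required
dim(hadamard_list(list(mat, mat)))  # 2 6: element-wise; equal dimensions required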

2.6. CP Decomposition

The “cp” function implements the classical alternating least squares method (Phan, Tichavsky et al. 2013) to compute the CP decomposition of a general K-tensor. Note that this algorithm is not guaranteed to converge to the global minimum.

cp1 <- cp(tnsr, num_components = 2, max_iter = 25, tol = 1e-05)
cp1$fnorm_resid
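
Besides the residual norm, the returned object contains (assuming the fields documented in the “rTensor” manual) the component weights and one factor matrix per mode:

cp1$lambdas         # component weights
sapply(cp1$U, dim)  # factor matrices: one n_k x 2 matrix per mode
cp1$norm_percent    # percent of the Frobenius norm explained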

2.7. Tucker Decomposition

The “tucker” function uses the Alternating Least Squares (ALS) estimation procedure, also known as Higher-Order Orthogonal Iteration (HOOI), to compute the Tucker decomposition.

tk1 <- tucker(tnsr,ranks=c(2,2,2,2))
tk1$fnorm_resid
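
The fit is stored alongside the core tensor and factor matrices (field names as documented in the “rTensor” manual):

dim(tk1$Z)          # the 2 x 2 x 2 x 2 core tensor
sapply(tk1$U, dim)  # factor matrices with orthonormal columns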

Remark: if there is no truncation in one of the modes (i.e., the rank equals the mode size), then the Tucker decomposition is the same as MPCA. If there is no truncation in any of the modes, then it is the same as the HOSVD.

2.8. HOSVD Decomposition

The higher-order singular value decomposition (HOSVD):

tnsr <- rand_tensor(modes = c(2, 4, 6, 8))
hosvd1 <- hosvd(tnsr)
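
The ranks can optionally be truncated, giving a truncated HOSVD; a minimal sketch:

hosvd2 <- hosvd(tnsr, ranks = c(2, 3, 4, 5))
hosvd2$fnorm_resid  # error of the rank-truncated reconstruction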

2.9. MPCA

Multilinear principal component analysis (MPCA) is a special case of the general Tucker decomposition for K-tensors, compressing on K-1 modes and leaving one mode uncompressed.
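
A minimal sketch on the 2 x 4 x 6 x 8 tensor above, compressing the first K - 1 = 3 modes (the ranks argument has length K - 1, as documented in the “rTensor” manual):

mpca1 <- mpca(tnsr, ranks = c(1, 2, 3), max_iter = 25, tol = 1e-05)
mpca1$fnorm_resid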

2.10. PVD and t-SVD

PVD was proposed by (Crainiceanu et al. 2011). The function “pvd” implements this decomposition in “rTensor”.

The t-SVD decomposition is based on the t-product; see (Li et al. 2018) and (Kilmer et al. 2013) for details. The function “t_svd” implements this decomposition in “rTensor”.
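
Minimal sketches of both (argument names taken from the “rTensor” manual; both functions expect a 3-tensor, and the pvd ranks here are illustrative):

tnsr3 <- rand_tensor(modes = c(5, 6, 7))
pvd1 <- pvd(tnsr3, uranks = rep(3, 7), wranks = rep(3, 7), a = 3, b = 3)
tsvd1 <- t_svd(tnsr3)
fnorm(t_svd_reconstruct(tsvd1) - tnsr3)  # should be near zero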

Summary of Decompositions in “rTensor”

| Function | Tensor size | Other parameters |
|---|---|---|
| cp | \(n_1 \times n_2 \times \cdots \times n_K\) | number of components \(r\), maximum number of iterations, convergence criterion |
| mpca | \(n_1 \times n_2 \times \cdots \times n_K\) | vector of ranks \(r = (r_1, \cdots, r_{K-1})\), maximum number of iterations, convergence criterion |
| tucker | \(n_1 \times n_2 \times \cdots \times n_K\) | vector of ranks \(r = (r_1, \cdots, r_K)\), maximum number of iterations, convergence criterion |
| pvd | \(n_1 \times n_2 \times n_3\) | vector of left ranks \(l = (l_1, \cdots, l_{n_3})\), vector of right ranks \(h = (h_1, \cdots, h_{n_3})\), final left rank \(r_1\), final right rank \(r_2\) |
| hosvd | \(n_1 \times n_2 \times \cdots \times n_K\) | optional: vector of ranks \(r = (r_1, \cdots, r_K)\) |
| t_svd | \(n_1 \times n_2 \times n_3\) | none |

3. Package “tensor”

Sometimes this package is very useful in tensor arithmetic, especially when you need some kind of “loop sum”. However, the reference manual for “tensor” describes the tensor function with only a single sentence: “The tensor product of two arrays is notionally an outer product of the arrays collapsed in specific extents by summing along the appropriate diagonals”, which is too concise. We give more details here.

Given matrices \(A, B\) with compatible dimensions,

\(\text{tensor}(A,B,2,1) = AB\)

\(\text{tensor}(A,B,2,2) = AB^{T}\)

library(tensor)
A <- matrix(1:6, 2, 3)
B <- matrix(1:12, 4, 3)
identical(A %*% t(B), tensor(A, B, 2, 2))
## [1] TRUE
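
The first identity can be checked the same way; here C is an illustrative 3 x 4 matrix so that the matched extents agree:

C <- matrix(1:12, 3, 4)
identical(A %*% C, tensor(A, C, 2, 1))  # should be TRUE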

Given a tensor \(\mathcal{X} \in \mathbb{R}^{d_1 \times d_2 \times \cdots \times d_K}\) and a matrix \(M_i \in \mathbb{R}^{n_i \times d_i}\),

\(\text{tensor}(\mathcal{X},M_i,i,2) \in \mathbb{R}^{d_1 \times \cdots \times d_{i-1} \times d_{i+1} \times \cdots \times d_K \times n_i}\); specifically,

\(\text{tensor}(\mathcal{X},M_i,i,2) = \text{k_fold}(\text{k_unfold}(\text{ttm}(\mathcal{X},M_i,i),i),K)\)

where “ttm”, “k_unfold” and “k_fold” are the “rTensor” functions introduced earlier: the result of the mode-\(i\) product is unfolded along mode \(i\) and folded back as the last mode \(K\).

For example,

M <- B
X <- A %o% A
Y <- tensor(X, M, 2, 2)
dim(X)
## [1] 2 3 2 3
dim(M)
## [1] 4 3
dim(Y)
## [1] 2 2 3 4
temp <- ttm(as.tensor(X), B, m = 2)
identical(as.tensor(Y), k_fold(k_unfold(temp,m=2),m=4,modes=c(2,2,3,4)))
## [1] TRUE

Given a matrix \(U \in \mathbb{R}^{d_i \times d_j}\), \(\text{tensor}(\mathcal{X}, U, c(i,j), c(1,2))\) sums out modes \(i\) and \(j\), giving a tensor in \(\mathbb{R}^{d_1 \times \cdots \times d_{i-1} \times d_{i+1} \times \cdots \times d_{j-1} \times d_{j+1} \times \cdots \times d_K}\) (written here for \(i < j\)).

For example,

dim(Y)
## [1] 2 2 3 4
dim(M)
## [1] 4 3
tensor(Y,M,c(4,3),c(1,2))
##       [,1]  [,2]
## [1,] 22606 28552
## [2,] 28552 36064

Thus, in a call \(\text{tensor}(X,Y,m,n)\), the matched extents must agree, i.e., dim(X)[m] == dim(Y)[n] for each matched pair.

This function can greatly simplify and speed up our code.

For example, suppose we want to compute \(\sum_{i}A_iB_i\). Instead of writing a loop, we can create tensors \(\mathcal{X}, \mathcal{Y}\) corresponding to \(A, B\) by adding a new mode indexed by \(i\),

i.e., \(\mathcal{X}[i,:,:] = A_i\) and \(\mathcal{Y}[i,:,:] = B_i\).

Then \(\sum_{i}A_iB_i = \text{tensor}(\mathcal{X},\mathcal{Y},c(1,3),c(1,2))\). Similarly, \(\sum_{i}A_iB_i^T = \text{tensor}(\mathcal{X},\mathcal{Y},c(1,3),c(1,3))\).

The same logic extends to higher-order tensor products, which makes this function broadly useful.
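
A worked sketch of this trick (the array names and dimensions here are illustrative):

set.seed(1)
n <- 5
X <- array(rnorm(n * 2 * 3), dim = c(n, 2, 3))  # X[i, , ] plays the role of A_i (2 x 3)
Y <- array(rnorm(n * 3 * 4), dim = c(n, 3, 4))  # Y[i, , ] plays the role of B_i (3 x 4)
loop_sum <- Reduce(`+`, lapply(seq_len(n), function(i) X[i, , ] %*% Y[i, , ]))
all.equal(loop_sum, tensor(X, Y, c(1, 3), c(1, 2)))  # should be TRUE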

References:

[1] Li J, Bien J, Wells MT (2018). “rTensor: An R Package for Multidimensional Array (Tensor) Unfolding, Multiplication, and Decomposition.” Journal of Statistical Software, 87(10). URL: https://www.jstatsoft.org/article/view/v087i10.

[2] Kolda TG, Bader BW (2009). “Tensor Decompositions and Applications.” SIAM Review, 51(3), 455-500. URL: https://doi.org/10.1137/07070111X.

[3] Kilmer M, Braman K, Hao N, Hoover R (2013). “Third-Order Tensors as Operators on Matrices: A Theoretical and Computational Framework with Applications in Imaging.” SIAM Journal on Matrix Analysis and Applications, 34(1), 148-172. URL: https://doi.org/10.1137/110837711.

[4] Lock EF, Nobel AB, Marron JS (2011). “Comment on ‘Population Value Decomposition, a Framework for the Analysis of Image Populations’ by Crainiceanu et al.” Journal of the American Statistical Association, 106(495), 798-802. URL: https://doi.org/10.1198/jasa.2011.ap10089.

[5] “Package ‘rTensor’”, URL: https://cran.r-project.org/web/packages/rTensor/rTensor.pdf.

[6] “Package ‘tensor’”, URL: https://cran.r-project.org/web/packages/tensor/tensor.pdf.