Most non-mathematicians might guess, without any formal proof, that tensor operations are pretty hard, but new results prove that many of them are NP-hard, which is not good news if you are trying to work something out using them.
A tensor is the n-dimensional generalization of a matrix. A matrix is a 2D tensor in that it has rows and columns. A 3D tensor is a cube of numbers with rows, columns and slices. In a matrix the numbers are indexed by two variables, e.g. something like a_{ij}; in a tensor the number of indexing variables can be more than two, e.g. a_{ijk} for a tensor of dimension, or more accurately rank, 3.
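If you want to see the difference concretely, a quick sketch in NumPy (used here purely as an illustration, not something from the paper) shows a matrix as a rank-2 array and a rank-3 tensor as a cube of numbers with one extra index:

```python
import numpy as np

# A matrix is a rank-2 tensor: entries indexed by two variables, a[i, j].
a = np.arange(6).reshape(2, 3)      # 2 rows, 3 columns

# A rank-3 tensor is a cube of numbers: entries indexed by three, t[i, j, k].
t = np.arange(24).reshape(2, 3, 4)  # 2 slices, each a 3x4 matrix

print(a[1, 2])     # entry in row 1, column 2 -> 5
print(t[1, 2, 3])  # entry in slice 1, row 2, column 3 -> 23
```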
We use matrix algebra and matrix numerical methods in almost every numerical program; they are the foundation of just about all the number crunching we do. We solve linear systems of equations, find eigenvalues, perform singular value decompositions and so on. These might be regarded as difficult math by some programmers, but for the majority of these tasks we have polynomial time algorithms. That is, vector and matrix computations are in P.
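All of the standard matrix tasks just listed run in polynomial time, roughly O(n^3) for an n x n matrix, and any linear algebra library does them routinely. A minimal NumPy sketch, assuming nothing beyond a well-conditioned random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))  # a random 100x100 matrix
b = rng.standard_normal(100)

x = np.linalg.solve(A, b)       # solve a linear system - roughly O(n^3)
w = np.linalg.eigvals(A)        # eigenvalues - polynomial time
U, s, Vt = np.linalg.svd(A)     # singular value decomposition - polynomial time

print(np.allclose(A @ x, b))    # the solve really worked -> True
```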
Tensor algebra generalizes the linear algebra that we are all familiar with to higher dimensions, giving multilinear algebra. It turns up in advanced geometry, the theory of gravity, i.e. the general theory of relativity, numerical mechanics, and so on. It is also being used in AI, computer vision and a number of cutting-edge approaches to all sorts of topics. The good thing about tensor algebra is that from an algorithmic point of view it isn't really anything new; in most cases there are just a few more indices to cope with.
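The "just a few more indices" point is easy to see in code. A sketch using NumPy's einsum (an illustration of the idea, not anything from the paper): a matrix-vector product sums over one shared index, and the rank-3 analogue is the same formula with one extra free index:

```python
import numpy as np

# Matrix-vector product: y_i = sum_j a_ij * x_j
a = np.arange(6.0).reshape(2, 3)
x = np.array([1.0, 2.0, 3.0])
y = np.einsum('ij,j->i', a, x)       # identical to a @ x

# The rank-3 analogue just adds an index: z_ik = sum_j t_ijk * x_j
t = np.arange(24.0).reshape(2, 3, 4)
z = np.einsum('ijk,j->ik', t, x)     # contract the middle index

print(y)        # [ 8. 26.]
print(z.shape)  # (2, 4)
```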
Surely the algorithms that we need to work with, say, rank-3 tensors are no more difficult than those for rank-2 tensors, i.e. matrices? They have to be in P, right?
Well no.
According to a recent paper, things are much worse for rank-3 tensors. It seems that they mark a dividing line between the linear, convex problems that are tractable and the more difficult nonlinear, nonconvex class. In the paper a whole list of generalizations of tractable matrix problems are shown to be NP-hard in their rank-3 formulations. These include finding eigenvalues, finding approximate eigenvalues, symmetric eigenvalues, singular values, proving positive definiteness, approximating a tensor by rank-1 tensors and so on. There is even a mention of a generalized question about finding a matrix function which is shown to be undecidable for any tensor 20x20x2 or greater. This is a big surprise because the equivalent matrix function can be easily found, and it is another hint that throwing just one extra index into a matrix problem makes it very much more difficult. As the title of the paper puts it: Most tensor problems are NP-hard.
Should we abandon tensor methods because most of them are not in P?
You need to keep in mind that these sorts of analyses are only valid asymptotically, as the size n of an n x n x n tensor gets very large. It could well be that for a range of small n we can find algorithms that get the job done in reasonable time. The fact that a problem is NP-hard doesn't mean it's impossible!
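For example, one of the NP-hard problems on the list, approximating a tensor by a rank-1 tensor, is routinely attacked in practice with a simple alternating heuristic known as higher-order power iteration. A minimal sketch (my illustration, not an algorithm from the paper; it is only guaranteed to find a local optimum, but on small tensors it usually does fine):

```python
import numpy as np

def rank1_approx(T, iters=200, seed=0):
    """Heuristic best rank-1 approximation of a rank-3 tensor T
    by alternately updating the three factor vectors (higher-order
    power iteration). May converge to a local optimum only."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    v = rng.standard_normal(T.shape[1])
    w = rng.standard_normal(T.shape[2])
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)   # best scale factor
    return lam * np.einsum('i,j,k->ijk', u, v, w)

# A tensor that is exactly rank 1 should be recovered:
u0, v0, w0 = np.array([1.0, 2.0]), np.array([3.0, 1.0]), np.array([1.0, 1.0])
T = np.einsum('i,j,k->ijk', u0, v0, w0)
approx = rank1_approx(T)
print(np.allclose(T, approx))  # True
```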
The paper concludes with some open questions and, if you are reasonably good at math, you should find it an interesting read.