Tensor Notation

Throughout this text, we will be working in 3D. The deformation of a body will be described using three-component tensors (i.e. vectors) that carry magnitude and direction. A vector with only three components suffices because, as a body deforms, any point within the body undergoes a motion described by a simple translation in 3D space. Once we begin looking at strains (and stresses), we will have to consider the motion of elements, rather than points. Since elements can deform axially in three directions as well as undergo shear, the description of stresses and strains will require the use of higher-order tensors (i.e. matrices).

“Tensor” notation (index notation) is essentially a way of keeping track of the components of matrices when performing matrix operations or algebra, without the need to repeatedly write out matrices. In this way, tensors save paper! In addition to this benefit, tensor notation (index notation) lends itself nicely to computer programming. Having said that, the majority of derivations in the later chapters will involve algebra that the reader is probably already familiar with. For example, the product of an n×1 vector and a 1×n vector produces an n×n matrix, while the dot product of the same two vectors produces a scalar. Similarly, matrix products (denoted with a “\cdot” for reasons that are explained in this chapter), inverses, transposes, and other operations are probably already familiar, as are the basic properties of these operations (e.g. matrix multiplication is associative but not commutative). Thus, it is possible to jump right into the later chapters without studying tensors and only risk becoming “stuck” on the few derivations that operate explicitly on components of matrices.
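As a quick refresher, here is a minimal sketch of those two products. Python with NumPy is used purely as an illustrative choice here and in the examples below; nothing in the text depends on it.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (n x 1)(1 x n) -> n x n matrix (the outer product)
outer = np.outer(a, b)
print(outer.shape)  # (3, 3)

# a . b -> scalar (the dot product)
dot = np.dot(a, b)
print(dot)  # 32.0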

Tensors (matrices and vectors) will be written in \mathbf{boldface}. When their components are of interest, alphabetical subscripts (“indices”) will be used, and the boldface removed, since each component of a tensor is merely a scalar. Most vectors will be written as lower-case English letters, while most higher-order tensors will be written using upper-case English letters or Greek letters. Only rectangular (Cartesian) coordinate systems will be considered.

So, how does tensor math work and in particular, what is “index notation?” Further, what specific tensor operations are going to be most important for us? These are the questions that will be addressed in this chapter.

Consider a typical rectangular coordinate system defined by the unit vectors \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 (see Figure):

[Figure: a rectangular (Cartesian) coordinate system with unit vectors \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 along the axes]

We know that any vector, \mathbf{a}, with components a_1, a_2, a_3, can be written in terms of the unit vectors \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3, as follows:

(1)   \begin{equation*} \mathbf{a}=a_1\mathbf{e_1}+a_2\mathbf{e_2}+a_3\mathbf{e_3}=\sum_{i=1}^3a_i\mathbf{e_i} \end{equation*}

Our convention (sometimes called the “Einstein Summation Convention”) will be to simply drop the \sum symbol. The subscript (or “index”) will be assumed to be summed. This is one of the important ideas of “tensor notation” (or “index notation”).
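To make the convention concrete, here is a short sketch (again assuming Python with NumPy) that builds \mathbf{a} from its components and the unit vectors, i.e. a_i\mathbf{e}_i with the sum over i written out explicitly:

import numpy as np

# Standard Cartesian basis vectors e_1, e_2, e_3
e = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]

a_components = [2.0, -1.0, 4.0]  # a_1, a_2, a_3

# a = a_i e_i : the repeated index i is summed from 1 to 3
a = sum(a_i * e_i for a_i, e_i in zip(a_components, e))
print(a)  # [ 2. -1.  4.]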


note: the index i runs from 1 to 2 in 2D and from 1 to 3 in 3D


The dot product between two vectors can be written in index notation:

(2)   \begin{equation*} \mathbf{a} \cdot \mathbf{b}=a_1b_1+a_2b_2+a_3b_3=a_ib_i \end{equation*}
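In code, NumPy's einsum function mirrors the index notation almost literally: the repeated index “i” is summed away, leaving a scalar. This is only a sketch of the correspondence, not the only way to compute a dot product:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a_i b_i : the repeated index i is summed -> scalar
dot = np.einsum('i,i->', a, b)
print(dot)    # 32.0
print(a @ b)  # same result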

We know that the multiplication of a matrix and a vector results in a vector. The components of this vector can be expressed as follows:

(3)   \begin{equation*} \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix} \Longrightarrow x_i=A_{ij}b_j  \text{(we'll derive soon)} \end{equation*}

In eq. 3, we sum over “j” only, because the convention is to sum only over indices that are “repeated” within a given term of an expression. Another way to think about it, which will always work in this text unless otherwise stated, is that “repeated indices” appear on only one side of the equation, which indicates that they should be summed. The “free index,” which in eq. 3 is “i,” appears on both the left-hand side and the right-hand side of the equation (and only once in any given term), and therefore we know not to sum over “i.”
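The same einsum-style sketch makes the free/repeated distinction visible: “j” appears twice on the input side (so it is summed), while “i” survives to the output:

import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)  # A_ij
b = np.array([1.0, 2.0, 3.0])           # b_j

# x_i = A_ij b_j : j is repeated (summed), i is free (survives)
x = np.einsum('ij,j->i', A, b)
print(x)      # [14. 32. 50.]
print(A @ b)  # same result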

Alternatively, consider what would happen if we summed over both indices:

\begin{equation*} \sum_{i,j=1}^3 A_{ij}b_j = A_{11}b_1+A_{12}b_2+A_{13}b_3+A_{21}b_1+A_{22}b_2+A_{23}b_3+A_{31}b_1+A_{32}b_2+A_{33}b_3 = \text{scalar} \end{equation*}

This produces a scalar, not the vector we expect from multiplying a matrix and a vector, so the free index “i” must not be summed.


note: In eq. 3, “i” is a free index while “j” is a repeated (“dummy”) index. Repeated indices are summed, while free indices are not.
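For contrast, here is a sketch of what happens if we (incorrectly) sum over both indices, as in the expansion above: the result collapses to a scalar rather than the vector x_i.

import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)
b = np.array([1.0, 2.0, 3.0])

# Summing over BOTH i and j collapses the result to a scalar
scalar = np.einsum('ij,j->', A, b)
print(scalar)         # 96.0
print(np.sum(A @ b))  # same: 96.0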

