We already know that a vector can be written in terms of its components and the basis vectors: $\mathbf{v} = v_i\,\mathbf{e}_i$
Similarly, we need to be able to express a higher order matrix, using tensor notation:

$\mathbf{A} = A_{ij}\,\mathbf{e}_i\mathbf{e}_j$

This is sometimes written: $\mathbf{A} = A_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j$, where “$\otimes$” denotes the “dyadic” or “tensor” product
As opposed to the vector expansion, where a sum of vectors yields a vector, each term of the tensor expansion is a scaled dyad – itself a 3×3 matrix – so that here:

sum of 3×3 matrices = 3×3 matrix
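This can be checked numerically. Below is a minimal Python sketch (the matrix values are arbitrary example data, not from the text) that rebuilds a 3×3 matrix as the sum of nine scaled dyads, each itself a 3×3 matrix:

```python
# Sketch: a 3x3 matrix reassembled as the sum A_ij (e_i ⊗ e_j), where each
# dyad e_i ⊗ e_j is a 3x3 matrix with a single 1 entry.
# Example values for A are arbitrary.

A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 1.0],
     [4.0, 0.0, 2.0]]

def basis(i):
    # Standard orthonormal basis vector e_i (0-indexed here)
    return [1.0 if k == i else 0.0 for k in range(3)]

def dyad(u, v):
    # Outer (dyadic) product u ⊗ v: a 3x3 matrix with entries u_i v_j
    return [[u[i] * v[j] for j in range(3)] for i in range(3)]

# Sum of nine scaled dyads (each a 3x3 matrix) rebuilds the 3x3 matrix A
S = [[0.0] * 3 for _ in range(3)]
for i in range(3):
    for j in range(3):
        D = dyad(basis(i), basis(j))
        for r in range(3):
            for c in range(3):
                S[r][c] += A[i][j] * D[r][c]

assert S == A
print("sum of dyads rebuilds A")
```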
Recall eq. 3 in Section 1: Tensor Notation, which states that $\mathbf{A} \cdot \mathbf{x} = \mathbf{b}$, where $\mathbf{A}$ is a 3×3 matrix, $\mathbf{x}$ is a vector, and $\mathbf{b}$ is the solution to the product between $\mathbf{A}$ and $\mathbf{x}$. To see why this is so, we can consider the product between $\mathbf{A}$ and the basis vector $\mathbf{e}_k$, and then consider the product between $\mathbf{A}$ and an arbitrary vector $\mathbf{x}$. Before we do this, it should be mentioned that products between tensors and vectors (producing a vector) as well as products between two tensors (producing a tensor) will directly make use of the “dot product,” even though the “dot product” operator is sometimes thought of as a “scalar product.” The “dot product” treatment is used consistently throughout this text; although mathematically a bit “sloppy,” it enables one to use index notation to derive quantities in a very direct manner.
The “dot product” will be used in this text to signify the tensor product between a tensor and a vector or the tensor product between two tensors. This is consistent with most of the literature in solid mechanics. A small minority of authors, however, consider this to be “sloppy” math and insist, instead, that the dot product between tensors produce a scalar value. This distinction is very important since tensor products, which produce tensors, are prevalent in solid mechanics and this text will heavily use the “dot” convention when deriving important identities and quantities. Authors that use a different convention would use a very different approach for derivation, in particular where index notation is used.
Note the use of the Kronecker delta in simplifying the expression in eq. 3.
Again, the solution is a vector; this time with components $A_{ij}x_j$, as expected (eq. 3 in Section 1: Tensor Notation).
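To make the index sum concrete, here is a minimal Python sketch (the values of $\mathbf{A}$ and $\mathbf{x}$ are arbitrary example data, not from the text) that computes the components $A_{ij}x_j$ of $\mathbf{A} \cdot \mathbf{x}$ by explicitly summing over the repeated index:

```python
# Sketch: the i-th component of A·x is A_ij x_j (summed over j).
# The 3x3 matrix A and vector x below are arbitrary example values.

A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 1.0],
     [4.0, 0.0, 2.0]]
x = [1.0, -1.0, 2.0]

# b_i = A_ij x_j  (sum over the repeated index j)
b = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(b)  # → [-1.0, -1.0, 8.0]
```

The explicit sum over $j$ is exactly a standard matrix–vector multiply.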
note: Eq. 4 is standard matrix multiplication. We had to use a “dot” product to get the tensor algebra to work as a typical matrix – vector multiplication (we’ll see later that $\mathbf{A} \cdot \mathbf{B}$ also results in the standard matrix – matrix product). This is consistent with most of the literature in solid mechanics.
note: Remember, in the above derivations, $\delta_{jk}$ is non-zero only if $j = k$. This was explained in eq. 2 in Section 1: Kronecker Delta. Note how this simplified the derivations. This is very common in tensor algebra.
So, consider eq. 3. Let’s “pick out” a particular component of this tensor, as follows:

$A_{ij} = \mathbf{e}_i \cdot \mathbf{A} \cdot \mathbf{e}_j$ (eq. 5)
note: The LHS of eq. 5 is only one term. I.e. this is how you pick out a single term of a matrix.
note: For order: $A_{ij} = \mathbf{e}_i \cdot (\mathbf{A} \cdot \mathbf{e}_j)$, where $A_{ij}$ gives the component of $\mathbf{A} \cdot \mathbf{e}_j$ in the “$\mathbf{e}_i$” direction.
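The “picking out” operation of eq. 5 can be checked numerically. A minimal Python sketch follows (the values of $\mathbf{A}$ are arbitrary example data, and helper names like `matvec` are ours, not from the text):

```python
# Sketch: "picking out" A_ij as e_i · (A · e_j), using standard basis
# vectors. Example values for A are arbitrary.

A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 1.0],
     [4.0, 0.0, 2.0]]

def basis(i):
    # Standard orthonormal basis vector e_i (0-indexed here)
    return [1.0 if k == i else 0.0 for k in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

# e_i · (A · e_j) recovers the single component A[i][j]
for i in range(3):
    for j in range(3):
        assert dot(basis(i), matvec(A, basis(j))) == A[i][j]
print("all components recovered")
```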
Consider the following coordinate system (see Figure):
$Q_{ij} = \mathbf{e}_i' \cdot \mathbf{e}_j$ (terms [components] of a rotation matrix)

In other words, the rotated basis vectors can be written in terms of the original basis vectors:

$\mathbf{e}_i' = Q_{ij}\,\mathbf{e}_j$ (eq. 6)
To be more useful, we need to show that $\mathbf{e}_i = Q_{ji}\,\mathbf{e}_j'$.
Multiply both sides of eq. 6 by $Q_{ik}$ (summing over $i$):

$Q_{ik}\,\mathbf{e}_i' = Q_{ik}\,Q_{ij}\,\mathbf{e}_j$

And, since $Q_{ik}\,Q_{ij} = \delta_{kj}$ (this should be obvious):

$Q_{ik}\,\mathbf{e}_i' = \delta_{kj}\,\mathbf{e}_j = \mathbf{e}_k$

which, after renaming indices, is the desired result: $\mathbf{e}_i = Q_{ji}\,\mathbf{e}_j'$
note: $(\mathbf{Q}^T)_{ij} = Q_{ji}$ (this is true of any tensor)
$\mathbf{Q}$ is an “orthogonal” tensor. A particular identity of an orthogonal tensor can be written as follows:

$\mathbf{Q} \cdot \mathbf{Q}^T = \mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{I}$ (eq. 7)
For an orthogonal tensor $\mathbf{Q}$ (one that obeys eq. 7), it can be shown that $\det \mathbf{Q} = \pm 1$ (a proof can be found in [Asaro]).
From eq. 6, we know that $\mathbf{e}_i' = Q_{ij}\,\mathbf{e}_j$, so the components of an arbitrary vector $\mathbf{v}$ transform as:

$v_i' = \mathbf{v} \cdot \mathbf{e}_i' = Q_{ij}\,v_j$ (eq. 8)
In eq. 8, $Q_{ij} = \mathbf{e}_i' \cdot \mathbf{e}_j$ can be computed directly
(assuming we know the unit vectors defining the original and rotated coordinate systems).
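A minimal Python sketch of this computation follows. The rotated frame here is an assumed example (a rotation about $\mathbf{e}_3$ by an arbitrary angle, not from the text); the sketch builds $Q_{ij} = \mathbf{e}_i' \cdot \mathbf{e}_j$ from the unit vectors, then checks eq. 7 and the determinant:

```python
import math

# Sketch: building Q_ij = e_i' · e_j from known unit vectors of the
# original and rotated frames. The rotated frame below is the original
# frame rotated about e_3 by 30 degrees (an arbitrary choice for the demo).

th = math.radians(30.0)
e  = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # original basis
ep = [[ math.cos(th), math.sin(th), 0.0],                  # rotated basis e_i'
      [-math.sin(th), math.cos(th), 0.0],
      [0.0, 0.0, 1.0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

Q = [[dot(ep[i], e[j]) for j in range(3)] for i in range(3)]

# Check eq. 7: Q·Q^T = I (Q is orthogonal)
for i in range(3):
    for k in range(3):
        s = sum(Q[i][j] * Q[k][j] for j in range(3))
        assert abs(s - (1.0 if i == k else 0.0)) < 1e-12

# det Q = +1 for a rotation (an orthogonal tensor has det = ±1)
det = (Q[0][0] * (Q[1][1] * Q[2][2] - Q[1][2] * Q[2][1])
     - Q[0][1] * (Q[1][0] * Q[2][2] - Q[1][2] * Q[2][0])
     + Q[0][2] * (Q[1][0] * Q[2][1] - Q[1][1] * Q[2][0]))
assert abs(det - 1.0) < 1e-12
print("Q is orthogonal with det = +1")
```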
What about transforming a tensor?
The following will be simply stated, and then proved:

$\mathbf{T}' = \mathbf{Q} \cdot \mathbf{T} \cdot \mathbf{Q}^T$, i.e. $T'_{ij} = Q_{ik}\,Q_{jl}\,T_{kl}$ (eq. 9)
Tensor product practice and quick proof of eq. 9:
Now taking $\mathbf{T}$ to be $T_{kl}\,\mathbf{e}_k \otimes \mathbf{e}_l$, we find:

$\mathbf{e}_i' \cdot \mathbf{T} \cdot \mathbf{e}_j' = T_{kl}\,(\mathbf{e}_i' \cdot \mathbf{e}_k)(\mathbf{e}_l \cdot \mathbf{e}_j') = Q_{ik}\,Q_{jl}\,T_{kl}$
Setting $T'_{ij} = \mathbf{e}_i' \cdot \mathbf{T} \cdot \mathbf{e}_j'$, we can see that the components are $Q_{ik}\,Q_{jl}\,T_{kl}$ – i.e. the desired result (eq. 9)
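Eq. 9 can also be verified numerically. A minimal Python sketch (the rotation angle and the values of $\mathbf{T}$ are arbitrary example data, not from the text) confirming that the index form $T'_{ij} = Q_{ik}Q_{jl}T_{kl}$ agrees with the matrix product $\mathbf{Q} \cdot \mathbf{T} \cdot \mathbf{Q}^T$:

```python
import math

# Sketch: checking eq. 9 numerically. Q is a rotation about e_3 and T is
# an arbitrary example tensor (both chosen for the demo, not from the text).

th = math.radians(40.0)
Q = [[ math.cos(th), math.sin(th), 0.0],
     [-math.sin(th), math.cos(th), 0.0],
     [0.0, 0.0, 1.0]]
T = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]

# Index form: T'_ij = Q_ik Q_jl T_kl (sum over k and l)
Tp_index = [[sum(Q[i][k] * Q[j][l] * T[k][l]
                 for k in range(3) for l in range(3))
             for j in range(3)] for i in range(3)]

# Matrix form: Q·T·Q^T, done as two standard matrix products
QT = [[sum(Q[i][k] * T[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
Tp_matrix = [[sum(QT[i][k] * Q[j][k] for k in range(3)) for j in range(3)]
             for i in range(3)]

assert all(abs(Tp_index[i][j] - Tp_matrix[i][j]) < 1e-12
           for i in range(3) for j in range(3))
print("index form matches Q·T·Q^T")
```

Note that the “dot” products here are nothing more than standard matrix products, as the next note emphasizes.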
note: We put a “dot” in $\mathbf{Q} \cdot \mathbf{T} \cdot \mathbf{Q}^T$ to get the tensor algebra to work. It’s really just a standard matrix product. If ever a product is written $\mathbf{A}\mathbf{B}$ or $\mathbf{A}\mathbf{x}$ (with no dot) in this text, it is probably a typo! Most of the literature in solid mechanics uses the same notation as in this text. Some authors omit the “dot” (e.g. $\mathbf{A}\mathbf{B}$) except when using index notation. Some authors take $\mathbf{A} \cdot \mathbf{B}$ to be the scalar product, which is a completely different definition, as opposed to a difference merely in notation. Fortunately, authors that take $\mathbf{A} \cdot \mathbf{B}$ to be the scalar product are a small minority in solid mechanics.