# Introduction to Materials Science and Engineering

Last update:

## 1 Catalog Description

MAT_SCI 201 introduces the core topics and basic concepts of Materials Science and Engineering. We cover introductory materials processing, structure, properties, and performance, with particular emphasis on the relationship between structure and properties. We focus on the conventional materials classes: metals, ceramics, and polymers, and discuss their various properties, such as mechanical, electronic, thermal, optical, magnetic, and electrochemical. Broader themes that arise are how materials’ performance influences technological development, the economy, the environment, and society. Prerequisites are Chem 131/151/171.

## 2 Course Outcomes

At the conclusion of the course students will be able to (broadly):
1. Correlate various materials properties (mechanical, optical, electronic) with materials structure and composition.
2. Describe how processing conditions can be controlled to produce different structures and, consequently, tune materials properties and performance.
3. Select materials for various applications by assessing how the combination of materials properties defines a material’s performance.
4. Understand the role materials have in facilitating technological development, the economy, the environment, and society.
These broad course-level outcomes are supplemented by 5-8 more topical outcomes at the beginning of each module.

## 3 Math Primer

There are no specific mathematics prerequisites for MAT_SCI 201. However, success in this course does require the ability to employ basic algebra, vector manipulations, trigonometry, and calculus. No advanced mathematics (differential equations, linear algebra, etc.) is required.

### 3.1 Basic Rules for Exponents

You will often work with exponents and will have to apply operations to them. You will need to know the following:
| Operation | Formula | Example |
|---|---|---|
| Multiplication: add exponents | $a^m \times a^n = a^{m+n}$ | $x^2 \times x^3 = x^5$ |
| Division: subtract exponents | $\frac{a^m}{a^n} = a^{m-n}$ | $\frac{x^8}{x^3} = x^5$ |
| Power to a power: multiply exponents | $(a^m)^n = a^{mn}$ | $(x^3)^4 = x^{12}$ |
| Power of a product: distribute power | $(ab)^m = a^m b^m$ | $(2x)^4 = 16x^4$ |
| Power of a quotient: distribute power | $\left(\frac{a}{b}\right)^m = \frac{a^m}{b^m}$ | $\left(\frac{x}{5}\right)^2 = \frac{x^2}{25}$ |
| Negative exponents: make positive by shifting across the quotient line | $a^{-n} = \frac{1}{a^n}$ or $\frac{1}{a^{-n}} = a^n$ | $3x^{-4} = \frac{3}{x^4}$ |
| Zero exponent: always equal to 1 | $\frac{a^m}{a^m} = a^0 = 1$ | $\frac{x^0}{4} = \frac{1}{4}$ |
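As a quick sanity check, the rules in the table can be verified numerically. The sketch below uses Python's `**` operator; the base `x = 2.0` is an arbitrary choice, and `math.isclose` guards against floating-point rounding:

```python
from math import isclose

x = 2.0  # any nonzero base behaves the same way

assert isclose(x**2 * x**3, x**5)         # multiplication: add exponents
assert isclose(x**8 / x**3, x**5)         # division: subtract exponents
assert isclose((x**3)**4, x**12)          # power to a power: multiply exponents
assert isclose((2 * x)**4, 2**4 * x**4)   # power of a product: distribute
assert isclose((x / 5)**2, x**2 / 5**2)   # power of a quotient: distribute
assert isclose(x**-4, 1 / x**4)           # negative exponent
assert x**0 == 1                          # zero exponent
print("all exponent rules check out")
```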

### 3.2 Vectors

Working with vectors will be important when navigating crystal lattices. It is important that you recall the form and construction of these vectors, as well as (1) how to calculate the length of a vector, (2) how to test for orthogonality between two vectors, and (3) how to calculate the angle between two vectors.
We'll be working in a Cartesian coordinate system using an orthonormal basis set. The basis vectors are:
$$
\begin{aligned}
\hat{x} &= (1, 0, 0) && (3.1)\\
\hat{y} &= (0, 1, 0) && (3.2)\\
\hat{z} &= (0, 0, 1) && (3.3)
\end{aligned}
$$
Any vector a can then be expressed in 3-dimensional space as:
$$a = a_1\hat{x} + a_2\hat{y} + a_3\hat{z} \qquad (3.4)$$
Or, in column notation:
$$a = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \qquad (3.5)$$
In this class, we will often use crystallographic convention, in which notation for a lattice vector (you'll see this in Ch. 2) is condensed to $\left[uvw\right]$. More on that later.
You should know how to add and subtract vectors. For example, the addition of the vectors a and b:
$$a + b = (a_1 + b_1)\hat{x} + (a_2 + b_2)\hat{y} + (a_3 + b_3)\hat{z} \qquad (3.6)$$
Subtraction is similar, of course.
You should also know how to calculate the length of a vector. This is:
$$|a| = \sqrt{a_1^2 + a_2^2 + a_3^2} \qquad (3.7)$$
Or, if you are more comfortable putting this in terms of the dot-product:
$$|a| = \sqrt{a \cdot a} \qquad (3.8)$$
Finally, it's important to calculate the angle (or at least the cosine of an angle) between two vectors, a and b, which can be done using the definition of the scalar product:
$$
\begin{aligned}
a \cdot b &= |a||b|\cos\theta && (3.9)\\
\cos\theta &= \frac{a \cdot b}{|a||b|} && (3.10)\\
\cos\theta &= \frac{a_1 b_1 + a_2 b_2 + a_3 b_3}{\sqrt{a_1^2 + a_2^2 + a_3^2}\,\sqrt{b_1^2 + b_2^2 + b_3^2}} && (3.11)
\end{aligned}
$$
When $a \cdot b = 0$, $\cos\theta = 0$ and $\theta = \pi/2$, or 90${}^{\circ}$. In this case the vectors are orthogonal.
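These vector operations can be sketched in a few lines of Python with NumPy; the example vectors are arbitrary choices picked so that the orthogonality test fires:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, -1.0])

# Eq. (3.6): component-wise addition
s = a + b                                  # [3., 2., 1.]

# Eq. (3.7)/(3.8): length via the dot product
length = np.sqrt(a @ a)                    # sqrt(1 + 4 + 4) = 3.0

# Eq. (3.10): cosine of the angle between a and b
cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Orthogonality test: a . b = 0 implies theta = 90 degrees
print(np.isclose(a @ b, 0.0))              # a.b = 2 + 0 - 2 = 0 -> True
```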

### 3.3 Differential and Integral Notation

We will generally employ Leibniz's notation for differentiation and antidifferentiation. The derivative of a function of one variable, e.g. $f(x) = f$, where $x$ is the independent variable, is written:
$$\frac{df}{dx} \qquad (3.12)$$
And higher-order derivatives are written as:
$$\frac{d^2 f}{dx^2},\ \frac{d^3 f}{dx^3},\ \ldots,\ \frac{d^n f}{dx^n}. \qquad (3.13)$$
You will encounter one partial differential equation during this course, which describes diffusion in time and one spatial dimension (Fick's second law). You will not be required to solve this equation, but you will have to use it. Partial derivatives of functions of multiple variables use the same notation as above, but with the $\partial$ character. Here we define $g(x,t) = g$, where $x$ and $t$ are independent variables:
$$\frac{\partial g}{\partial x},\ \frac{\partial g}{\partial t} \qquad (3.14)$$
And higher-order derivatives taken with respect to the same variable are written as:
$$\frac{\partial^2 g}{\partial x^2},\ \frac{\partial^3 g}{\partial x^3},\ \ldots,\ \frac{\partial^n g}{\partial x^n}. \qquad (3.15)$$
Antidifferentiation will be denoted using the integral symbol, e.g. for the definite integral of $x^2$ from $a$ to $b$:
$$\int_a^b x^2\,dx \qquad (3.16)$$
After integration, evaluation of this definite integral is written as:
$$\left.\frac{x^3}{3}\right|_a^b \qquad (3.17)$$
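The evaluation in (3.16)-(3.17) can be checked numerically. A minimal sketch, comparing the closed form $b^3/3 - a^3/3$ against a midpoint-rule sum (the limits $a = 1$, $b = 2$ are arbitrary):

```python
# Definite integral of x**2 from a to b vs. the antiderivative x**3/3
a, b = 1.0, 2.0

closed_form = b**3 / 3 - a**3 / 3          # = 8/3 - 1/3 = 7/3

# Midpoint-rule approximation on a fine grid
n = 100_000
dx = (b - a) / n
riemann = sum((a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx

print(abs(riemann - closed_form) < 1e-6)   # True
```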
Below, we use Lagrange's shorthand to denote derivatives, i.e. $\frac{d}{dx}f = f'(x)$.

### 3.4 Differentiation

The following differentiation rules may be used at some point during the course. Note that $c$ is a constant. We will not require you to differentiate trigonometric or hyperbolic functions.

#### 3.4.1 General Formulas

$$
\begin{aligned}
&\frac{d}{dx}(c) = 0 && (3.18)\\
&\frac{d}{dx}\left[f(x) + g(x)\right] = f'(x) + g'(x) && (3.19)\\
&\frac{d}{dx}\left[g(x)f(x)\right] = f(x)g'(x) + g(x)f'(x) && (3.20)\\
&\frac{d}{dx}f(g(x)) = f'(g(x))\,g'(x) && (3.21)\\
&\frac{d}{dx}\left[cf(x)\right] = c\,f'(x) && (3.22)\\
&\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{g(x)f'(x) - f(x)g'(x)}{[g(x)]^2} && (3.23)\\
&\frac{d}{dx}x^n = n x^{n-1} && (3.24)
\end{aligned}
$$
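The product rule (3.20) and chain rule (3.21) can be spot-checked with a central-difference approximation; $f$ and $g$ below are arbitrary smooth examples, and `deriv` is a hypothetical helper, not part of any library:

```python
from math import exp

def deriv(func, x, h=1e-6):
    """Central-difference estimate of func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

f = lambda x: x**3
fp = lambda x: 3 * x**2          # f' by hand, via rule (3.24)
g = lambda x: exp(2 * x)
gp = lambda x: 2 * exp(2 * x)    # g' by hand, via rules (3.21) and (3.25)

x0 = 0.7

# Product rule (3.20): (fg)' = f g' + g f'
lhs = deriv(lambda x: f(x) * g(x), x0)
rhs = f(x0) * gp(x0) + g(x0) * fp(x0)
print(abs(lhs - rhs) < 1e-4)     # True

# Chain rule (3.21): (f(g(x)))' = f'(g(x)) g'(x)
lhs = deriv(lambda x: f(g(x)), x0)
rhs = fp(g(x0)) * gp(x0)
print(abs(lhs - rhs) < 1e-2)     # True
```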

#### 3.4.2 Exponents and Logarithmic Functions

$$
\begin{aligned}
&\frac{d}{dx}e^x = e^x && (3.25)\\
&\frac{d}{dx}a^x = a^x \ln a && (3.26)\\
&\frac{d}{dx}\ln|x| = \frac{1}{x} && (3.27)\\
&\frac{d}{dx}\log_a x = \frac{1}{x \ln a} && (3.28)
\end{aligned}
$$

### 3.5 Integration

The following integration rules may be used at some point during the course. Note that $C$ is a constant. We will not require you to perform integrations that may involve trigonometric or hyperbolic functions.

#### 3.5.1 Basic Forms

$$
\begin{aligned}
&\int u^n\,du = \frac{u^{n+1}}{n+1} + C, \quad n \neq -1 && (3.29)\\
&\int u^{-1}\,du = \ln|u| + C && (3.30)\\
&\int e^u\,du = e^u + C && (3.31)\\
&\int a^u\,du = \frac{a^u}{\ln a} + C && (3.32)
\end{aligned}
$$

### 3.6 Logarithmic Identities

The following logarithmic identities may be used in class. If so, they will be supplied on your equation sheet.
$$
\begin{aligned}
&\log(xy) = \log(x) + \log(y) && (3.33)\\
&\log\left(\frac{x}{y}\right) = \log(x) - \log(y) && (3.34)\\
&\log(x^d) = d\log(x) && (3.35)\\
&\log(\sqrt[y]{x}) = \frac{\log(x)}{y} && (3.36)\\
&\log(x^c y^d) = \log(x^c) + \log(y^d) = c\log(x) + d\log(y) && (3.37)
\end{aligned}
$$
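A quick numeric check of identities (3.33)-(3.37); `math.log` is the natural logarithm, but the identities hold for any base, and the values of $x$, $y$, $c$, $d$ are arbitrary:

```python
from math import log, isclose

x, y, c, d = 3.0, 7.0, 2.0, 5.0

assert isclose(log(x * y), log(x) + log(y))                 # (3.33)
assert isclose(log(x / y), log(x) - log(y))                 # (3.34)
assert isclose(log(x**d), d * log(x))                       # (3.35)
assert isclose(log(x**(1 / y)), log(x) / y)                 # (3.36)
assert isclose(log(x**c * y**d), c * log(x) + d * log(y))   # (3.37)
print("all log identities check out")
```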
## 4 Mathematics

The text below is not a rigorous approach to the mathematical theory, nor is it a wholly systematic or comprehensive description of the topics covered. It is a selection of topics recommended by core instructors. These include mathematical concepts and procedures that will be encountered in your core courses, and instructors expect you to be familiar with them prior to the beginning of a course, i.e., they will not cover them in detail. Instead, use the content below as a reference as these topics arise and as a platform for more in-depth study. Please contact [jonathan.emery@northwestern.edu](mailto:jonathan.emery@northwestern.edu) with suggestions for additional material to be included in this section.

For those who have never seen the mathematics below, or who are not comfortable with the material, further preparation may be necessary. Options for those students include (a) enrolling in ES-APPM-311-1 and ES-APPM-311-2 (Methods of Applied Mathematics) and/or (b) utilizing the suggested resources for supplemental study.
### 4.1 Linear Algebra

Linear algebra is a branch of mathematics central to physical description in Materials Science: it concerns the description of vector spaces and is used in solving systems of equations. Materials Science graduate students will encounter applications of linear algebra in all core courses. The sections below outline basic linear algebra concepts.

#### 4.1.1 Linear Systems

#### 4.1.2 Gauss Elimination (Release TBD)

#### 4.1.3 Matrix Algebra and Operations (Release TBD)

#### 4.1.4 Linear Transformations (Release TBD)

#### 4.1.5 Determinants (Release TBD)

#### 4.1.6 Eigenvalues and Eigenvectors (Release TBD)

#### 4.1.7 Linear Differential Equations (Release TBD)

1. Linear Differential Operators
2. Linear Differential Equations
### 4.2 Tensors (Release 1/2017)

Tensors are mathematical objects that define relationships between scalars, vectors, matrices, and other tensors. Tensors are represented as *arrays* of various dimensionality (defined by rank, or order). The moniker "tensor" generally suggests a higher-rank array (most often rank $\geq 3$), but scalars, vectors, and matrices are also tensors.

In the MSE graduate core, students will encounter tensors of various rank. In physical science, tensors characterize the properties of a physical system. Tensors are the *de facto* tool used to describe, for example, diffusion, nucleation and growth, states of stress and strain, Hamiltonians in quantum mechanics, and many more physical phenomena. Physical processes of interest to Materials Scientists take place in Euclidean 3-space ($\mathbb{R}^3$) and are well described by tensor representations.

We build up our description of the handling of tensors by separately describing rank-0, rank-1, rank-2, and rank-3 tensors. Tensors of lower rank should be familiar: students will have encountered them previously as scalars (rank 0), vectors (rank 1), and matrices (rank 2). The term *tensor* typically denotes arrays of higher dimensionality (rank $\geq 3$). Physical examples include the rank-2 [Cauchy stress tensor](https://en.wikipedia.org/wiki/Cauchy_stress_tensor) (which describes the stress state at a point within a material), the rank-3 piezoelectric tensor (which relates the dielectric polarization of a material to a stress state), and the rank-4 stiffness tensor (which relates strain state and stress state in a system obeying Hooke's law).

Classifying tensors by rank allows us to quickly identify the number of tensor components we will work with: a tensor of order $p$ has $N^p$ components, where $N$ is the dimensionality of the space in which we are operating. In general, you will be operating in Euclidean 3-space, so the number of components of a tensor is $3^p$.

**Scalars** are tensors with *order*, or *rank*, 0. Scalars represent physical quantities (often accompanied by a unit of measurement) that possess only a magnitude: e.g., temperature, mass, charge, and distance. Scalars are typically represented by Latin or Greek symbols and have $3^0 = 1$ component.

**Vectors** are tensors with a *rank* of 1. In symbolic notation, vectors are typically represented using lowercase bold or bold-italic symbols such as $\mathbf{u}$ or $\pmb{a}$. Bold typeface is not always amenable to handwriting, however, so a right-arrow accent is also employed: $\vec{u}$ or $\vec{a}$. Students are likely to encounter various conventions depending on their field of study.

In $\mathbb{R}^3$ a vector is defined by $3^1 = 3$ components. In *xyz* Cartesian coordinates we utilize the Cartesian basis with three orthogonal unit vectors $\{\mathbf{e}_{x}, \mathbf{e}_{y}, \mathbf{e}_{z}\}$. We define a 3D vector $\mathbf{u}$ in this basis with the components ($u_x$, $u_y$, $u_z$), or equivalently ($u_1$, $u_2$, $u_3$). Often, we represent the vector $\mathbf{u}$ using the shorthand $u_i$, where the subscript $i$ denotes an index that ranges over the dimensionality of the system (1, 2, 3 for $\mathbb{R}^3$; 1, 2 for $\mathbb{R}^2$).

Vectors are often written as a bracketed vertical list to facilitate matrix operations. Using the notation defined above:

$$\mathbf{u} = u_i = \begin{bmatrix} u_x \\ u_y \\ u_z \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$$

**Matrices** are tensors with a *rank* of 2. In $\mathbb{R}^2$ a matrix has $2^2 = 4$ components and in $\mathbb{R}^3$ a matrix has $3^2 = 9$ components. As with vectors, we use the range convention when denoting a matrix, which now possesses two subscripts, $i$ and $j$. We use the example of the true stress, or [Cauchy stress tensor](https://en.wikipedia.org/wiki/Cauchy_stress_tensor), $\sigma_{ij}$:

$$\sigma_{ij} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz}\\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz}\\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix}$$

Here the diagonal holds the normal components of stress and the off-diagonal entries hold the shear components. In this notation the first index denotes the row and the second denotes the column ($x = 1$, $y = 2$, $z = 3$).

**Tensors** of rank 3 in $\mathbb{R}^3$ have $3^3 = 27$ components and are represented in range notation using subscripts $i$, $j$, and $k$, e.g., $T_{ijk}$. At rank 3 (and even more so at rank 4, which requires an array of rank-3 tensors) the object becomes difficult to represent clearly on paper. An example of a simple tensor, [the rank-3 permutation tensor](https://en.wikipedia.org/wiki/Levi-Civita_symbol#Three_dimensions_2), is shown in Fig. 1. You can also watch [this video](https://www.youtube.com/watch?v=f5liqUk0ZTw), which helps with the visualization.

*Figure 1: The rank-3 permutation tensor, by Arian Kriesch; corrections by Xmaster1123 and Luxo (own work) [GFDL (http://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)].*

One can write the $i = 1, 2, 3$ matrices that stack to form this tensor as:

$$\epsilon_{1jk} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & -1 & 0 \end{bmatrix}, \qquad \epsilon_{2jk} = \begin{bmatrix} 0 & 0 & -1\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{bmatrix}, \qquad \epsilon_{3jk} = \begin{bmatrix} 0 & 1 & 0\\ -1 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$$
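The stacked matrices above can be assembled and checked with NumPy. A sketch, building the permutation tensor directly from its definition ($\epsilon_{ijk} = +1$ for even permutations of the indices, $-1$ for odd, $0$ otherwise); indices are 0-based in code:

```python
import numpy as np

# Rank-3 permutation (Levi-Civita) tensor: 3**3 = 27 components
eps = np.zeros((3, 3, 3))
for (i, j, k), sign in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                        ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = sign

# The i = 1 slice matches the first stacked matrix above
print(eps[0])
# [[ 0.  0.  0.]
#  [ 0.  0.  1.]
#  [ 0. -1.  0.]]
```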
### 4.3 Summation Notation

Often, it is useful to simplify notation when manipulating tensor equations. To do this, we utilize Einstein summation notation, or simply *summation notation*. This convention says that *if an index is repeated twice (and only twice) in a single term, we assume summation over the range of the repeated subscript*. The simplest example is the representation of the trace of a matrix:

$$\mathrm{tr}(\sigma) = \underbrace{\sigma_{kk}}_{\substack{\text{summation} \\ \text{notation}}} = \sum_{k=1}^{3}\sigma_{kk} = \sigma_{11} + \sigma_{22} + \sigma_{33}$$

In $\sigma_{kk}$ the index $k$ is repeated, and this means we assume summation of the index over its range (in this case 1-3, as we are working with the stress tensor).

**Example 1:** This comes in very useful when representing matrix multiplication. Let's say we have an ($M \times N$) matrix $\mathbf{A} = a_{ij}$ and an ($R \times P$) matrix $\mathbf{B} = b_{ij}$. We know from linear algebra that the matrix product $\mathbf{AB}$ is defined only when $R = N$, and the result is an ($M \times P$) matrix $\mathbf{C} = c_{ij}$. Here's an example with a ($2 \times 3$) matrix times a ($3 \times 2$) matrix in conventional representation:

$$\mathbf{AB} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\\ b_{31} & b_{32} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} & a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32}\\ a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} & a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} \end{bmatrix} = c_{ij}$$

Here, we can use summation notation to greatly simplify the expression. The components of the matrix $c_{ij}$ are $c_{11}$, $c_{12}$, $c_{21}$, and $c_{22}$, defined:

$$\begin{aligned} c_{11} &= a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31}\\ c_{12} &= a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32}\\ c_{21} &= a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31}\\ c_{22} &= a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} \end{aligned}$$

These terms can all be represented using the following expression:

$$c_{ij} = \sum_{k=1}^{3} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j}$$

So, in general, for any matrix product:

$$c_{ij} = \sum_{k=1}^{N} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{iN}b_{Nj}$$

Or, by dropping the summation symbol and fully utilizing the summation convention:

$$c_{ij} = a_{ik}b_{kj}$$

Note that the term $c_{ij}$ *has no repeated subscript: there is no summation implied here. It is simply a matrix.* Summation *is* implied in the $a_{ik}b_{kj}$ term because of the repeated index $k$, often called the dummy index.

**Example 2:** Another example is a ($3 \times 3$) matrix multiplied by a ($3 \times 1$) column vector:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix} = \begin{bmatrix} a_{11}b_{1} + a_{12}b_{2} + a_{13}b_{3}\\ a_{21}b_{1} + a_{22}b_{2} + a_{23}b_{3}\\ a_{31}b_{1} + a_{32}b_{2} + a_{33}b_{3} \end{bmatrix} = a_{ij}b_{j}$$

It will be important to learn to read such summation notation, so that when you see a repeated dummy index (often represented with $k$ or $l$; see Cai and Nix, 2.1.3) you recognize the convention. Some useful representations of summation notation are shown in Table 1.

**Table 1:** Uses of summation notation that students may encounter in the graduate core. Bracketed symbols indicate $3 \times 3$ matrices.

| Summation convention | Non-summation convention | Full expression |
|---|---|---|
| $\lambda = a_ib_i$ | $\lambda = \sum_{i=1}^{3}a_ib_i$ | $\lambda = a_1b_1 + a_2b_2 + a_3b_3$ |
| $c_i = S_{ik}x_k$ | $c_i = \sum_{k=1}^{3}S_{ik}x_k$ | $c_1 = S_{11}x_1 + S_{12}x_2 + S_{13}x_3$; $c_2 = S_{21}x_1 + S_{22}x_2 + S_{23}x_3$; $c_3 = S_{31}x_1 + S_{32}x_2 + S_{33}x_3$ |
| $\lambda = S_{ij}S_{ij}$ | $\lambda = \sum_{j=1}^{3}\sum_{i=1}^{3}S_{ij}S_{ij}$ | $\lambda = S_{11}S_{11} + S_{12}S_{12} + \cdots + S_{32}S_{32} + S_{33}S_{33}$ |
| $C_{ij} = A_{ik}B_{kj}$ | $C_{ij} = \sum_{k=1}^{3}A_{ik}B_{kj}$ | $[C] = [A][B]$ |
| $C_{ij} = A_{ki}B_{kj}$ | $C_{ij} = \sum_{k=1}^{3}A_{ki}B_{kj}$ | $[C] = [A]^{T}[B]$ |

In future releases I will add summation notation for the Kronecker delta $\delta_{ij}$, the Levi-Civita symbol $\epsilon_{ijk}$, the dot product, the cross product, determinants, the del operator ($\nabla$), and others as references.
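Einstein summation maps directly onto NumPy's `einsum`, which takes the index string almost verbatim. A sketch verifying several of the expressions above (the matrices are random examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))
x = rng.random(3)

# trace: sigma_kk (repeated index k implies summation)
assert np.isclose(np.einsum('kk->', A), np.trace(A))

# c_ij = a_ik b_kj  (matrix product)
assert np.allclose(np.einsum('ik,kj->ij', A, B), A @ B)

# c_i = S_ik x_k  (matrix times column vector)
assert np.allclose(np.einsum('ik,k->i', A, x), A @ x)

# C_ij = A_ki B_kj  (transpose product, last row of Table 1)
assert np.allclose(np.einsum('ki,kj->ij', A, B), A.T @ B)
print("summation-notation identities verified")
```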
<div id="coordinate-transformations-release-12017" class="section level2">
<h2>Coordinate Transformations (Release 1/2017)</h2>
;Cartesian coordinates are not the only coordinate system that MSE graduate students will encounter in the core. Cylindrical coordinates and spherical coordinates are both useful in, for example, describing stress and strain fields around dislocations and vacancies.;
;<strong>Cartesian</strong> coordinates, as mentioned in Sec. <a href="#subsec:Tensors" reference-type="ref" reference="subsec:Tensors">1.2</a> utilize an orthogonal basis set and are often the easiest to use when describing and visualizing vector operations and physical laws. The rank-2 stress tensor (introduced in Sec. <a href="#subsec:Tensors" reference-type="ref" reference="subsec:Tensors">1.2</a>) is represented using the following <span class="math inline">$$3 \times 3 \times 3$$</span> matrix and is shown in Fig.  <a href="#fig:StressTensors" reference-type="ref" reference="fig:StressTensors">2</a>:;
<div class="figure">
<p class="caption">Stress tensors for (a.) Cartesian, (b.) cylindrical, and (c.) spherical coordinate systems. From Nix and Cai.;
</div>
;<span class="math display">$\sigma_{ij} \begin{bmatrix} \sigma_{xx} &amp; \sigma_{xy} &amp; \sigma_{xz}\\ \sigma_{yx} &amp; \sigma_{yy} &amp; \sigma_{yz}\\ \sigma_{zx} &amp; \sigma_{zy} &amp; \sigma_{zz}\\ \end{bmatrix} \label{eq:CartesianStressTensor}$</span>;
;<strong>Cylindrical</strong> coordinates are also an orthogonal coordinate system defined in Fig. <a href="#fig:StressTensors" reference-type="ref" reference="fig:StressTensors">2</a>(b). The stress tensor in this coordinate system is defined by the cylinderical components <span class="math inline">$$r$$</span>, <span class="math inline">$$\theta$$</span>, and <span class="math inline">$$z$$</span>. Here, <span class="math inline">$$r$$</span> is the distance from the <span class="math inline">$$z$$</span>-axis to the point. <span class="math inline">$$\theta$$</span> is the angle between the reference direction (we use the <span class="math inline">$$x$$</span>-direction) and the vector that points from the origin to the coordinates projected onto the <span class="math inline">$$xy$$</span> plane. <span class="math inline">$$z$$</span> is the distance from the point’s coordinates projected onto <span class="math inline">$$xy$$</span> plane and the point itself. The stress tensor is represented as;
;<span class="math display">$\sigma_{ij}= \begin{bmatrix} \sigma_{rr} &amp; \sigma_{r \theta} &amp; \sigma_{r z}\\ \sigma_{\theta r} &amp; \sigma_{\theta\theta} &amp; \sigma_{\theta z}\\ \sigma_{z r} &amp; \sigma_{z \theta} &amp; \sigma_{zz}\\ \end{bmatrix} \label{eq:CylindricalStressTensor}$</span>;
;<strong>Spherical</strong> coordinates are defined by <span class="math inline">$$r$$</span>, <span class="math inline">$$\theta$$</span> and <span class="math inline">$$\phi$$</span>. Here <span class="math inline">$$r$$</span> is the radial distance from the origin to the point. <span class="math inline">$$\theta$$</span> is the azimuthal angle, or the angle between the <span class="math inline">$$x$$</span>-axis and the projection of the point onto the <span class="math inline">$$xy$$</span> plane. <span class="math inline">$$\phi$$</span> is the polar angle, or the angle between the <span class="math inline">$$z$$</span>-axis and the vector pointing from the origin to the point. The stress tensor is;
;<span class="math display">$\sigma_{ij}= \begin{bmatrix} \sigma_{rr} &amp; \sigma_{r \theta} &amp; \sigma_{r \phi}\\ \sigma_{\theta r} &amp; \sigma_{\theta\theta} &amp; \sigma_{\theta \phi}\\ \sigma_{\phi r} &amp; \sigma_{\phi \theta} &amp; \sigma_{\phi\phi}\\ \end{bmatrix} \label{eq:SphericalStressTensor}$</span>;
;We will often want to transform tensor values from one coordinate system to another in <span class="math inline">$${\rm I\!R}^3$$</span>. As an example, we will convert the stress state from a cylindrical coordinate system to a Cartesian coordinate system. This transformation from the stress state in the original coordinate system (<span class="math inline">$$\sigma_{kl} = \sigma_{kl}^{r \theta z}$$</span>) to the new coordinate system (<span class="math inline">$$\sigma_{ij}&#39; = \sigma_{ij}^{xyz}$$</span>) is performed using the following relationship:;
;<span class="math display">$\sigma_{ij}&#39; = Q_{ik}Q_{jk}\sigma_{kl} \label{eq:GeneralTransform}$</span>;
;where the summation notation (Sec. <a href="#sec:SummationNotation" reference-type="ref" reference="sec:SummationNotation">1.3</a>) is implicit. In our example the indices <span class="math inline">$$kl$$</span> indicate the original cylindrical coordinate system (<span class="math inline">$$r$$</span>, <span class="math inline">$$\theta$$</span>, <span class="math inline">$$z$$</span>) and the indices <span class="math inline">$$ij$$</span> indicate the new Cartesian coordinate system (<span class="math inline">$$x$$</span>, <span class="math inline">$$y$$</span>, <span class="math inline">$$z$$</span>).;
;Note that Eq. <a href="#eq:GeneralTransform" reference-type="ref" reference="eq:GeneralTransform"><span class="math display">$eq:GeneralTransform$</span></a> can be written in matrix form as:;
;<span class="math display">$\sigma&#39; = Q \cdot \sigma \cdot Q^{T}$</span>;
;The <span class="math inline">$$Q$$</span> matrix is defined by the dot products between the unit vectors of the two coordinate systems of interest. In a simplified 2D transformation from polar to Cartesian coordinates, there is no <span class="math inline">$$z$$</span> component in either coordinate system, and terms with those indices can be dropped.;
;<span class="math display">$Q_{ik} \equiv (\hat{e}_{i}^{xy} \cdot \hat{e}_k^{r \theta}) = \begin{bmatrix} (\hat{e}_{x} \cdot \hat{e}_{r}) &amp; (\hat{e}_{x} \cdot \hat{e}_{\theta})\\ (\hat{e}_{y} \cdot \hat{e}_{r}) &amp; (\hat{e}_{y} \cdot \hat{e}_{\theta})\\ \end{bmatrix}$</span>;
;where <span class="math inline">$$\hat{e}_{r}$$</span> and <span class="math inline">$$\hat{e}_{\theta}$$</span> are related geometrically to <span class="math inline">$$\hat{e}_{x}$$</span> and <span class="math inline">$$\hat{e}_{y}$$</span>:;
;<span class="math display">$\begin{bmatrix} \hat{e}_{r} = \hat{e}_{x} \cos(\theta) + \hat{e}_{y} \sin(\theta)\\ \hat{e}_{\theta} = -\hat{e}_{x} \sin(\theta) + \hat{e}_{y} \cos(\theta)\\ \end{bmatrix}$</span>;
;And therefore:;
;<span class="math display">\begin{aligned} Q_{ik} &amp;\equiv (\hat{e}_{i}^{xy} \cdot \hat{e}_k^{r \theta}) = \begin{bmatrix} (\hat{e}_{x} \cdot \hat{e}_{r}) &amp; (\hat{e}_{x} \cdot \hat{e}_{\theta})\\ (\hat{e}_{y} \cdot \hat{e}_{r}) &amp; (\hat{e}_{y} \cdot \hat{e}_{\theta})\\ \end{bmatrix} = \begin{bmatrix} Q_{xr} &amp; Q_{x\theta}\\ Q_{yr} &amp; Q_{y\theta}\\ \end{bmatrix} \\ &amp;= \begin{bmatrix} \left(\hat{e}_{x} \cdot \left[\hat{e}_{x} \cos(\theta) + \hat{e}_{y} \sin(\theta)\right]\right) &amp; \left(\hat{e}_{x} \cdot \left[-\hat{e}_{x} \sin(\theta) + \hat{e}_{y} \cos(\theta)\right]\right)\\ \left(\hat{e}_{y} \cdot \left[\hat{e}_{x} \cos(\theta) + \hat{e}_{y} \sin(\theta)\right]\right) &amp; \left(\hat{e}_{y} \cdot \left[-\hat{e}_{x} \sin(\theta) + \hat{e}_{y} \cos(\theta)\right]\right) \end{bmatrix}\\ &amp;= \begin{bmatrix} \cos(\theta) &amp; -\sin(\theta)\\ \sin(\theta) &amp; \cos(\theta) \end{bmatrix}\end{aligned}</span>;
;So, to convert the stress tensor in polar coordinates (<span class="math inline">$$\sigma_{kl}^{r\theta}$$</span>) to Cartesian (<span class="math inline">$$\sigma_{ij}^{xy}$$</span>), we take the triple dot-product:;
;<span class="math display">\begin{aligned} \sigma&#39; &amp;= Q \cdot \sigma \cdot Q^{T} = \begin{bmatrix} \sigma_{xx} &amp; \sigma_{xy}\\ \sigma_{yx} &amp; \sigma_{yy} \end{bmatrix}= \begin{bmatrix} \cos(\theta) &amp; -\sin(\theta)\\ \sin(\theta) &amp; con(\theta) \end{bmatrix}\cdot \begin{bmatrix} \sigma_{rr} &amp; \sigma_{r\theta}\\ \sigma_{\theta r} &amp; \sigma_{\theta \theta} \end{bmatrix} \cdot \begin{bmatrix} \cos(\theta) &amp; \sin(\theta)\\ -\sin(\theta) &amp; con(\theta) \end{bmatrix} \end{aligned}</span>;
;Completing the math yields:;
;<span class="math display">\begin{aligned} \sigma_{xx} &amp;= \cos(\theta) \left[\sigma_{rr} \cos(\theta) - \sigma_{\theta r} \sin( \theta)\right] - \sin(\theta)\left[\sigma_{r\theta}\cos(\theta) - \sigma_{\theta\theta}\sin(\theta)\right]\\ \sigma_{xy} &amp;= \sin(\theta) \left[\sigma_{rr} \cos(\theta) - \sigma_{\theta r} \sin( \theta)\right] + \cos(\theta)\left[\sigma_{r\theta}\cos(\theta) - \sigma_{\theta\theta}\sin(\theta)\right]\\ \sigma_{yx} &amp;= \cos(\theta) \left[\sigma_{\theta r} \cos(\theta) + \sigma_{rr} \sin( \theta)\right] - \sin(\theta)\left[\sigma_{\theta \theta}\cos(\theta) + \sigma_{r\theta}\sin(\theta)\right]\\ \sigma_{yy} &amp;= \sin(\theta) \left[\sigma_{\theta r} \cos(\theta) + \sigma_{rr} \sin( \theta)\right] + \cos(\theta)\left[\sigma_{\theta \theta}\cos(\theta) + \sigma_{r\theta}\sin(\theta)\right]\end{aligned}</span>;
;In a system with only one or two stress components, these coordinate transformations simplify greatly. Remember, though, that in <span class="math inline">$${\rm I\!R}^3$$</span> there will be <span class="math inline">$$N = 3^2 = 9$$</span> components due to the increased dimensionality.;
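The 2D polar-to-Cartesian transformation above is easy to verify numerically. A minimal sketch with NumPy (the function name and test stress state are illustrative, not from the text):

```python
# Sketch: verifying sigma' = Q . sigma . Q^T for the 2-D rotation matrix
# derived in the text. The example stress state is made up for illustration.
import numpy as np

def polar_to_cartesian_stress(sigma_polar, theta):
    """Rotate a 2x2 stress tensor from (r, theta) axes to (x, y) axes."""
    c, s = np.cos(theta), np.sin(theta)
    Q = np.array([[c, -s],
                  [s,  c]])  # Q_ik = e_i^(xy) . e_k^(r theta), as in the text
    return Q @ sigma_polar @ Q.T

# Pure radial tension sigma_rr = 1 evaluated at theta = 90 degrees should
# appear as pure sigma_yy in the Cartesian frame.
sigma_polar = np.array([[1.0, 0.0],
                        [0.0, 0.0]])
sigma_xy = polar_to_cartesian_stress(sigma_polar, np.pi / 2)
```

As a sanity check, a hydrostatic (identity) stress state is unchanged by any rotation, since Q·I·Qᵀ = QQᵀ = I.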
</div>
<div id="calculus" class="section level2">
<h2>Calculus</h2>
;We assume that incoming graduate students have completed coursework in calculus, including the basic calculation of derivatives, antiderivatives, definite integrals, series/sequences, and multivariate calculus. Below, we outline some more advanced calculus concepts that have specific physical relevance to material covered in the MSE core.;
;Any college-level calculus text is suitable for supplemental study. The sections below on Total Differentials (Sec. <a href="#subsec:totdiff" reference-type="ref" reference="subsec:totdiff">1.5.1</a>) and Exact/Inexact Differentials (Sec. <a href="#subsec:eidiff" reference-type="ref" reference="subsec:eidiff">1.5.2</a>) were adapted from the course materials of Richard Fitzpatrick at UT-Austin (available <a href="http://farside.ph.utexas.edu/teaching/sm1/Thermal.pdf">here</a>).;
<div id="subsec:totdiff" class="section level3">
<h3>Total Differentials: (Release 11/2016)</h3>
;<strong><em>Encountered in: MAT<code>_</code>SCI 401</em></strong>;
;When there exists an explicit function of several variables, such as <span class="math inline">$$f = f(x,y,t)$$</span>, <span class="math inline">$$f$$</span> has a <em>total</em> differential of the form:;
;<span class="math display">\begin{aligned} \Diff{}{f} = \Big(\Partial{}{f}{t}\Big)_{x,y}\Diff{}{t} + \Big(\Partial{}{f}{x}\Big)_{t,y} \Diff{}{x} + \Big(\Partial{}{f}{y}\Big)_{t,x} \Diff{}{y} \end{aligned}</span>;
;Here, we do not assume that <span class="math inline">$$f$$</span> is constant with respect to any of the arguments <span class="math inline">$$(x\text{,}\, y\text{, or } t)$$</span>. This equation defines the differential change in the function <span class="math inline">$$\Diff{}{f}$$</span> and accounts for all interdependencies between <span class="math inline">$$x$$</span>, <span class="math inline">$$y$$</span>, and <span class="math inline">$$t$$</span>. In general, the total differential can be defined as:;
;<span class="math display">\begin{aligned} \label{eq:TotDiff} \Diff{}{f} = \sum\limits_{i=1}^n \Big(\Partial{}{f}{x_i}\Big)_{x_{j\neq i}}\Diff{}{x_i}\end{aligned}</span>;
;The total differential is important when working with thermodynamic systems, which are described by thermodynamic parameters (e.g. <span class="math inline">$$P$$</span>, <span class="math inline">$$T$$</span>, <span class="math inline">$$V$$</span>) that are not necessarily independent. For example, the internal energy <span class="math inline">$$U$$</span> of some homogeneous system can be defined in terms of entropy <span class="math inline">$$S$$</span> and volume <span class="math inline">$$V$$</span>: <span class="math inline">$$U = U(S,V)$$</span>. According to Eq. <a href="#eq:TotDiff" reference-type="ref" reference="eq:TotDiff"><span class="math display">$eq:TotDiff$</span></a>, the infinitesimal change in internal energy is therefore: <span class="math display">\begin{aligned} \Diff{}{U} = \Big(\Partial{}{U}{S}\Big)_{V}\Diff{}{S} + \Big(\Partial{}{U}{V}\Big)_{S} \Diff{}{V}\end{aligned}</span>;
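The term-by-term construction of a total differential can be sketched symbolically. This assumes SymPy is available, and the function f(x, y, t) here is an arbitrary illustration, not from the text:

```python
# Sketch: building the total differential df = (df/dx) dx + (df/dy) dy +
# (df/dt) dt term by term for an arbitrary example function.
import sympy as sp

x, y, t, dx, dy, dt = sp.symbols('x y t dx dy dt')
f = x**2 * y + y * t  # illustrative f(x, y, t)

# Each partial derivative is taken with the other variables held fixed
df = sp.diff(f, x) * dx + sp.diff(f, y) * dy + sp.diff(f, t) * dt
# df = 2*x*y*dx + (x**2 + t)*dy + y*dt
```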
</div>
<div id="subsec:eidiff" class="section level3">
<h3>Exact and Inexact Differentials (Release 11/2016)</h3>
;<strong><em>Encountered in: MAT<code>_</code>SCI 401</em></strong>;
;Suppose we are assessing the infinitesimal change of some value, <span class="math inline">$$\Diff{}{f}$$</span>, in which <span class="math inline">$$\Diff{}{f}$$</span> is a linear differential of the form: <span class="math display">\begin{aligned} \Diff{}{f} = \sum\limits_{i=1}^m M_i(x_1,x_2,...x_m)\Diff{}{x_i}.\end{aligned}</span> In thermodynamics we are often concerned with linear differentials of two independent variables, such that <span class="math display">\begin{aligned} \label{eq:LinearDiff} \Diff{}{f} = M(x,y) \Diff{}{x} + N(x,y) \Diff{}{y}.\end{aligned}</span> An exact differential is one in which <span class="math inline">$$\int{\Diff{}{f}}$$</span> is path-independent. It can be shown (e.g. <a href="http://mathworld.wolfram.com/ExactDifferential.html">Wolfram Exact Differential</a>) that this means:;
;<span class="math display">\begin{aligned} \label{eq:ExactDiff} \Diff{}{f} = \Big(\Partial{}{f}{x}\Big)_{y} \Diff{}{x} + \Big(\Partial{}{f}{y}\Big)_{x} \Diff{}{y}. \end{aligned}</span>;
;It follows that;
;<span class="math display">\begin{aligned} \label{eq:ExactDiff2} \Big(\Partial{}{M}{y}\Big)_{x} = \Big(\Partial{}{N}{x}\Big)_{y}. \end{aligned}</span>;
;An inexact differential is one in which the equality defined in Eq. <a href="#eq:ExactDiff" reference-type="ref" reference="eq:ExactDiff"><span class="math display">$eq:ExactDiff$</span></a> (and therefore Eq. <a href="#eq:ExactDiff2" reference-type="ref" reference="eq:ExactDiff2"><span class="math display">$eq:ExactDiff2$</span></a>) is not necessarily true. An inexact differential is typically denoted using <em>bar</em> notation to define the infinitesimal value: <span class="math display">\begin{aligned} \text{\dj} f = \Big(\Partial{}{f}{x}\Big)_{y} \Diff{}{x} + \Big(\Partial{}{f}{y}\Big)_{x} \Diff{}{y}.\end{aligned}</span> Two physical examples make this clearer:;
<div class="displayquote">
;<strong>Example 1:</strong> Imagine you are speaking with a classmate who recently traveled from Chicago to Minneapolis. You know he is now in Minneapolis. Is it possible for you to know how much money he spent on gas (<span class="math inline">$$G$$</span>)? No, you cannot. <span class="math inline">$$G$$</span> depends on <em>how</em> your friend traveled to Minneapolis: his gas mileage, the cost of gas, and, of course, the route he took. <span class="math inline">$$G$$</span> cannot be known without understanding the details of the path, and is therefore path-dependent. The differential of <span class="math inline">$$G$$</span> is therefore <em>inexact</em>: <span class="math inline">$$\text{\dj}G$$</span>.;
;Now, what do we know about your friend’s distance, <span class="math inline">$$D$$</span>, to Chicago? This value does not depend on how he traveled; the only information you need is his current location in Minneapolis. His distance to Chicago, therefore, is a state variable and <span class="math inline">$$\Diff{}{D}$$</span> is an <em>exact</em> differential.;
</div>
<div class="displayquote">
;<strong>Example 2:</strong> Let’s reconsider a situation like that of Example 1, this time within the purview of thermodynamics. Consider the internal energy <span class="math inline">$$U$$</span> of a closed system. To achieve an infinitesimal change in energy <span class="math inline">$$\Diff{}{U}$$</span>, we provide a bit of work <span class="math inline">$$\text{\dj}W$$</span> or heat <span class="math inline">$$\text{\dj}Q$$</span>: <span class="math inline">$$\Diff{}{U} = \text{\dj}W + \text{\dj}Q$$</span> <a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a>. The work performed on and heat exchanged with the system are path-dependent — the total work done depends on <em>how</em> the work was performed or heat exchanged, and so <span class="math inline">$$\text{\dj}W$$</span> and <span class="math inline">$$\text{\dj}Q$$</span> are inexact.;
</div>
;It is sometimes useful to ask yourself about the nature of a variable to ascertain whether its differential is exact or inexact. That is, it makes sense to ask: “what is the energy of the system?” or “what is the pressure of the system?” This often helps in the identification of a state variable. However, it does not make sense to ask “what is the work of the system?” or “what is the heat of the system?” — these values depend on the process. Instead, you have to ask: “what is the work done on the system along this path?” or “what is the heat exchanged during this process?”;
;Finally, there are different properties we encounter when evaluating exact differentials (such as the linear differential in Eq. <a href="#eq:LinearDiff" reference-type="ref" reference="eq:LinearDiff"><span class="math display">$eq:LinearDiff$</span></a>) and inexact differentials (written as <span class="math inline">$$\text{\dj}f = M&#39;(x,y) \Diff{}{x} + N&#39;(x,y) \Diff{}{y}$$</span>). The integral of an exact differential over a closed path is necessarily zero: <span class="math display">\begin{aligned} \oint\Diff{}{f} \equiv 0,\end{aligned}</span> while the integral of an inexact differential over a closed path is not <em>necessarily</em> zero: <span class="math display">\begin{aligned} \oint\text{\dj}f\underset{n}{\neq} 0,\end{aligned}</span> where <span class="math inline">$$\Big(\underset{n}{\neq}\Big)$$</span> symbolizes “not necessarily equal to”. Indeed, when evaluating an inexact differential, it is important to consider the path. For example, if the work performed on a system going from a macrostate <span class="math inline">$$s_1$$</span> to a macrostate <span class="math inline">$$s_2$$</span> follows path <span class="math inline">$$L_{1}$$</span>, then the total work performed is: <span class="math display">\begin{aligned} W_{L_{1}} = \int\limits_{L_{1}} \text{\dj}W\end{aligned}</span> If we took a different path, <span class="math inline">$$L_{2}$$</span>, the total work performed may be different: <span class="math display">\begin{aligned} W_{L_{1}} \underset{n}{\neq} W_{L_{2}}\end{aligned}</span> A good illustration of a line integral over a scalar field is shown in the multimedia Fig. <a href="#fig:LineIntegral" reference-type="ref" reference="fig:LineIntegral"><span class="math display">$fig:LineIntegral$</span></a>. It is clear that, depending on the path, the evaluated integral will have different values.;
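Path dependence is easy to demonstrate numerically. In the sketch below (illustrative functions, not from the text), the exact differential df = y dx + x dy (for which f = xy) integrates to the same value along two different paths from (0, 0) to (1, 1), while the inexact differential đW = y dx does not:

```python
# Sketch: line integrals of an exact and an inexact differential along two
# paths, evaluated with the trapezoid rule.
import numpy as np

def line_integral(M, N, path, n=20001):
    """Integrate M(x,y) dx + N(x,y) dy along path(s), s in [0, 1]."""
    s = np.linspace(0.0, 1.0, n)
    x, y = path(s)
    g = M(x, y) * np.gradient(x, s) + N(x, y) * np.gradient(y, s)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))  # trapezoid rule

straight = lambda s: (s, s)       # path L1: the line y = x
parabola = lambda s: (s, s ** 2)  # path L2: the curve y = x^2

# Exact differential df = y dx + x dy: both paths give f(1,1) - f(0,0) = 1
exact_L1 = line_integral(lambda x, y: y, lambda x, y: x, straight)
exact_L2 = line_integral(lambda x, y: y, lambda x, y: x, parabola)

# Inexact differential dW = y dx: 1/2 along L1 but only 1/3 along L2
W_L1 = line_integral(lambda x, y: y, lambda x, y: 0 * x, straight)
W_L2 = line_integral(lambda x, y: y, lambda x, y: 0 * x, parabola)
```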
</div>
<div id="vector-calculus-release-tbd" class="section level3">
<h3>Vector Calculus (Release TBD)</h3>
;<strong><em>Encountered in: MAT<code>_</code>SCI 406, 408</em></strong>;
</div>
</div>
<div id="sec:DiffEQ" class="section level2">
<h2>Differential Equations</h2>
;Differential equations — equations that relate functions with their derivatives — are central to the description of natural phenomena in physics, chemistry, biology and engineering. In the sections below, we will outline basic classification of differential equations and describe methods and techniques used in solving equations that are encountered in the MSE graduate core.;
;The information provided below is distilled and specific to the MSE core, but is by <em>no means</em> equivalent to a thorough 1- or 2-quarter course in ODEs and PDEs. For students who are completely unfamiliar with the material below (i.e., those who have not taken a course in differential equations), we highly recommend enrollment in Applied Math 311-1 and 311-2 <a href="#fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a>.;
<div id="classification-of-differential-equations-release-112016" class="section level3">
<h3>Classification of Differential Equations (Release: 11/2016)</h3>
;<strong><em>Encountered in: MAT<code>_</code>SCI 405, 406, 408</em></strong>;
;Classifying a differential equation provides intuition about the physical process the equation describes, as well as context for how we go about solving it. A differential equation can be classified as either ordinary or partial, linear or non-linear, and by its homogeneity and equation order. These are described briefly below, with examples.;
<div id="ordinary-and-partial-differential-equations" class="section level4">
<h4>Ordinary and Partial Differential Equations —</h4>
;The primary classification we use to organize types of differential equations is whether they are <em>ordinary</em> or <em>partial</em> differential equations. <em>Ordinary differential equations</em> (ODEs) involve functions of a single variable. All derivatives present in the ODE are relative to that one variable. Partial differential equations are functions of more than one variable and the partial derivatives of these functions are taken with respect to those variables.;
;An example of an ODE is shown in Eq. <a href="#eq:RLC" reference-type="ref" reference="eq:RLC"><span class="math display">$eq:RLC$</span></a>. This equation has two functions, <span class="math inline">$$q(t)$$</span> (charge) and <span class="math inline">$$V(t)$$</span> (voltage), whose values depend on time <span class="math inline">$$t$$</span>. All of the derivatives are with respect to the independent variable <span class="math inline">$$t$$</span>. <span class="math inline">$$L$$</span>, <span class="math inline">$$R$$</span>, and <span class="math inline">$$C$$</span> are constants. <span class="math display">\begin{aligned} L \FullDiff{2}{q(t)}{t} + R \FullDiff{}{q(t)}{t} + \frac{1}{C} q(t) = V(t) \label{eq:RLC}\end{aligned}</span> This general example describes the flow of charge as a function of time in an <a href="https://en.wikipedia.org/wiki/RLC_circuit">RLC circuit</a> with an applied voltage that changes with time. Other examples of ODEs you may encounter in the MSE core include ODEs for grain growth as a function of time and the equations of motion.;
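As a sketch of how such an ODE is handled in practice, the second-order RLC equation can be rewritten as a first-order system in (q, q') and integrated numerically. SciPy is assumed here, and the component values and drive voltage are made up for illustration:

```python
# Sketch: integrating L q'' + R q' + q/C = V(t) as a first-order system.
import numpy as np
from scipy.integrate import solve_ivp

L_ind, R, C = 1.0, 0.5, 1.0      # illustrative inductance, resistance, capacitance
V = lambda t: np.sin(2.0 * t)    # illustrative applied voltage

def rhs(t, state):
    q, dq = state                # charge and current
    return [dq, (V(t) - R * dq - q / C) / L_ind]

# Integrate from t = 0 to 20 with initial charge q(0) = 1, current q'(0) = 0
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0])
```

By t = 20 the transient (which decays at rate R/2L) has largely died away, leaving the driven response.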
;<em>Partial differential equations</em> (PDEs) contain multivariable functions and their partial derivatives, i.e., derivatives with respect to one variable with the others held constant. As physical phenomena often vary in both space and time, PDEs — and methods of solving them — will be encountered in many of the core MSE courses. These phenomena include wave behavior, diffusion, the Schrödinger equation, heat conduction, the Cahn-Hilliard equation, and many others. A typical example of a PDE you will encounter is Fick’s second law. In 1D, this is: <span class="math display">\begin{aligned} \Partial{}{\varphi(x,t)}{t} = D\Partial{2}{\varphi(x,t)}{x} \label{eq:Ficks2}\end{aligned}</span> where <span class="math inline">$$\varphi$$</span> is the concentration as a function of position <span class="math inline">$$x$$</span> and time <span class="math inline">$$t$$</span>. This expression equates the change in the concentration over time to the shape (concavity) of the concentration profile. Partial differential equations are, by nature, often more difficult to solve than ODEs, but, as with ODEs, there exist simple, analytic, and systematic methods for solving many of these equations.;
</div>
<div id="equation-order" class="section level4">
<h4>Equation Order —</h4>
;The <em>order</em> of a differential equation is simply the order of the highest derivative present in the equation. In the preceding section, Eq. <a href="#eq:RLC" reference-type="ref" reference="eq:RLC"><span class="math display">$eq:RLC$</span></a> is a second-order equation. Eq. <a href="#eq:Ficks2" reference-type="ref" reference="eq:Ficks2"><span class="math display">$eq:Ficks2$</span></a> is also a second-order equation. Students in the MSE core will encounter 4<sup>th</sup>-order equations such as the Cahn-Hilliard equation, which describes phase separation and is discussed in detail in MAT<code>_</code>SCI 408. One note concerning notation — when writing higher-order differential equations it is common to abandon Leibniz’s notation (where an <span class="math inline">$$n^{\text{th}}$$</span>-order derivative is denoted as <span class="math inline">$$\FullDiff{n}{f}{x}$$</span>) in favor of Lagrange’s notation, in which the following representations are equivalent: <span class="math display">\begin{aligned} \text{Leibniz}:&amp; F\big[x,f(x),\FullDiff{}{f(x)}{x},\FullDiff{2}{f(x)}{x}...\FullDiff{n}{f(x)}{x}\big] = 0 \rightarrow\\ \text{Lagrange}:&amp; F\big[x,f,f\prime,f\prime\prime...f^{(n)}\big] = 0 \label{eq:LagrangeNote}\end{aligned}</span> An example would be the 3<sup>rd</sup>-order differential equation: <span class="math display">\begin{aligned} f\prime\prime\prime + 3f\prime + f\exp{x} = x\end{aligned}</span>;
</div>
<div id="linearity" class="section level4">
<h4>Linearity —</h4>
;While considering how to solve a differential equation, it is crucial to consider whether the equation is linear or non-linear. For example, an ODE like that represented in Eq. <a href="#eq:LagrangeNote" reference-type="ref" reference="eq:LagrangeNote"><span class="math display">$eq:LagrangeNote$</span></a> is linear if <span class="math inline">$$F$$</span> is a linear function of the variables <span class="math inline">$$f, f&#39;, f\prime\prime...f^{(n)}$$</span>. This definition also applies to PDEs. The expression for the general linear ODE of order <span class="math inline">$$n$$</span> is: <span class="math display">\begin{aligned} a_0(x)f^{(n)}+a_1(x)f^{(n-1)} + ... + a_n(x)f = g(x) \label{eq:LinearODE}\end{aligned}</span> Any expression that is not of this form is considered <em>nonlinear</em>. The presence of a product such as <span class="math inline">$$f\cdot f\prime$$</span>, a power such as <span class="math inline">$$(f\prime)^2$$</span>, or a sinusoidal function of <span class="math inline">$$f$$</span> would make the equation nonlinear.;
;The methods of solving linear differential equations are well-developed. Nonlinear differential equations, on the other hand, often require more complex analysis. As you will see, methods of <em>linearization</em> (small-angle approximations, stability theory) as well as numerical techniques are powerful ways to approach these problems.;
</div>
<div id="homogeneity" class="section level4">
<h4>Homogeneity —</h4>
;Homogeneity of a linear differential equation, such as that shown in Eq. <a href="#eq:LinearODE" reference-type="ref" reference="eq:LinearODE"><span class="math display">$eq:LinearODE$</span></a>, is satisfied if <span class="math inline">$$g(x) = 0$$</span>. This property of a differential equation is often connected to the <em>driving force</em> in a system. For example, the motion of a damped harmonic oscillator in 1D (derived from Newton’s laws of motion, <a href="https://en.wikipedia.org/wiki/Harmonic_oscillator">here</a>) is described by a homogeneous, linear, 2<sup>nd</sup>-order ODE: <span class="math display">\begin{aligned} x\prime\prime+2\zeta \omega_0 x\prime + \omega_0^2 x = 0\end{aligned}</span> where <span class="math inline">$$x = x(t)$$</span> is position as a function of time (<span class="math inline">$$t$$</span>), <span class="math inline">$$\omega_0$$</span> is the undamped angular frequency of the oscillator, and <span class="math inline">$$\zeta$$</span> is the damping ratio. If we add a sinusoidal driving force, however, the equation becomes inhomogeneous: <span class="math display">\begin{aligned} x\prime\prime+2\zeta \omega_0 x\prime + \omega_0^2 x = \frac{1}{m} F_0 \sin{(\omega t)} \label{eq:DDSOscillator}\end{aligned}</span> One may notice that Eq. <a href="#eq:DDSOscillator" reference-type="ref" reference="eq:DDSOscillator"><span class="math display">$eq:DDSOscillator$</span></a> has exactly the form of the first equation shown in this section (Eq. <a href="#eq:RLC" reference-type="ref" reference="eq:RLC"><span class="math display">$eq:RLC$</span></a>) — the ODE for a damped, driven harmonic oscillator has exactly the same form as that of the RLC circuit operating under an alternating driving voltage.;
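The physical consequence of homogeneity can be seen numerically: the homogeneous (undriven) oscillator decays to rest, while the inhomogeneous (driven) oscillator settles into sustained oscillation. A sketch with SciPy, using illustrative parameter values:

```python
# Sketch: contrasting the homogeneous and driven damped-oscillator ODEs.
import numpy as np
from scipy.integrate import solve_ivp

omega0, zeta, m, F0, omega = 1.0, 0.1, 1.0, 1.0, 0.5  # illustrative values

def oscillator(t, state, driven):
    x, v = state
    force = (F0 / m) * np.sin(omega * t) if driven else 0.0
    return [v, force - 2 * zeta * omega0 * v - omega0**2 * x]

t_end = 200.0
free   = solve_ivp(oscillator, (0, t_end), [1.0, 0.0], args=(False,), max_step=0.1)
driven = solve_ivp(oscillator, (0, t_end), [1.0, 0.0], args=(True,),  max_step=0.1)
```

The free oscillation decays like e^(−ζω₀t), while the driven response approaches a steady amplitude set by F₀, ω, ω₀, and ζ.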
</div>
<div id="boundary-conditions" class="section level4">
<h4>Boundary Conditions —</h4>
;Differential equations, when combined with a set of well-posed constraints, or boundary conditions, define a <em>boundary value problem</em>. Well-posed boundary value problems have unique solutions that follow from the imposed physical constraints on the system of interest. This analysis allows for the extraction of relevant physical information when investigating a physical system — the elemental composition at some position and time within a diffusion couple, the equilibrium displacement in a mechanically deformed body, or the energy eigenstate of a quantum system. While boundary conditions are not used to classify a differential equation itself, they are used to classify the entire boundary value problem — which is defined by both the differential equation and the boundary conditions.;
;Boundary value problems are at the heart of physical description in science and engineering. Solving these types of problems allows for the extraction of information (concentration, deformation, stress state, quantum state, etc.) from a system. There are a few types of boundary conditions that you may encounter in the MSE core:;
<ol style="list-style-type: decimal">
<li>;A <em>Dirichlet</em> (or first-type) boundary condition is one in which specific values are fixed on the boundary of a domain. An example is a system in which we have diffusion of carbon (in, for example, a carburizing atmosphere) into iron (possessing a volume defined as domain <span class="math inline">$$\Omega$$</span>), where the carbon concentration <span class="math inline">$$C(\mathbf{r},t)$$</span> at the interface is known for all time <span class="math inline">$$t &gt; 0$$</span>. Here, <span class="math inline">$$\mathbf{r}$$</span> is the position vector and the domain boundary is denoted as <span class="math inline">$$\partial \Omega$$</span>. If this concentration is a known function, <span class="math inline">$$f(\textbf{r},t)$$</span>, then the Dirichlet condition is described as: <span class="math display">$C(\textbf{r},t) = f(\textbf{r},t), \quad \forall\textbf{r} \in \partial \Omega$</span>;</li>
<li>;A <em>Neumann</em> (or second-type) boundary condition is one in which the values of the normal derivative (a directional derivative with respect to the normal of a surface or boundary, represented by the vector <span class="math inline">$$\mathbf{n}$$</span>) of the solution are known at the domain boundary. Continuing with our example above, this would mean we know the diffusion flux normal to the boundary at <span class="math inline">$$\mathbf{r}$$</span> at all times <span class="math inline">$$t$$</span>: <span class="math display">$\Partial{}{C(\textbf{r},t)}{\mathbf{n}} = g(\textbf{r},t), \quad \forall\textbf{r} \in \partial \Omega$</span> where <span class="math inline">$$g(\textbf{r},t)$$</span> is a known function, and the bold typesetting denotes a vector.;</li>
<li>;Two other types of boundary conditions you may encounter are Cauchy and Robin. A Cauchy boundary condition specifies both the solution value and its normal derivative at the boundary — i.e., it provides both Dirichlet and Neumann conditions. The Robin condition provides a <em>linear combination</em> of the solution and its normal derivative and is common in convection-diffusion equations.;</li>
<li>;Periodic boundary conditions are applied in periodic media or large, ordered systems. The previously described boundary conditions can therefore be combined into periodic sets using infinite sums of sine and cosine functions to create <em>Fourier series</em>. This will be discussed in more detail in Sec. <a href="#sec:FourierMethods" reference-type="ref" reference="sec:FourierMethods">1.6.2.4</a>.;</li>
</ol>
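As a sketch of how the first two condition types enter a calculation (the discretization and values here are illustrative, not from the text), consider the steady-state 1D problem C''(x) = 0 with Dirichlet conditions imposed at both ends of a finite-difference grid:

```python
# Sketch: imposing Dirichlet boundary conditions on a 1-D steady-state
# diffusion problem C'' = 0, discretized with central differences.
import numpy as np

n = 51
A = np.zeros((n, n))
b = np.zeros(n)

# Interior nodes: standard second-difference stencil for C'' = 0
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

# Dirichlet condition at x = 0: C(0) = 0
A[0, 0], b[0] = 1.0, 0.0
# Dirichlet condition at x = L: C(L) = 1  ->  linear steady state C(x) = x/L
A[-1, -1], b[-1] = 1.0, 1.0

C = np.linalg.solve(A, b)
```

Replacing the last row with a one-sided difference on the flux (e.g. (C_n − C_{n−1})/h = g) would impose a Neumann condition at that boundary instead.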
</div>
</div>
<div id="solving-differential-equations-release-112016" class="section level3">
<h3>Solving Differential Equations (Release: 11/2016)</h3>
;There are many ways to solve differential equations, including analytical and computational techniques. Below, we outline a number of methods that are used in the MSE core to solve relevant differential equations.;
<div id="separation-of-variables" class="section level4">
<h4>Separation of Variables</h4>
;, also known as the <em>Fourier Method</em>, is a general method used for both ODEs and PDEs in which the differential equation is rearranged so that the two variables are separated to opposite sides of the equation and then solved using techniques covered in an ODE class. <span id="sec:SepVar" label="sec:SepVar"></span>;
;This method will be used to solve many simpler differential equations, such as the heat and diffusion equations. These equations must be linear and homogeneous for separation of variables to work. The main goal is to take some differential equation, for example an ordinary differential equation: <span class="math display">\begin{aligned} \FullDiff{}{y}{x} &amp;= g(x)h(y)\\ \intertext{which we can rearrange as:} \frac{1}{h(y)}\mathop{dy} &amp;= g(x)dx\\ \intertext{We now integrate both sides of the equation to find the solution:} \int{\frac{1}{h(y)}\mathop{dy}} &amp;= \int{g(x)\mathop{dx}}\end{aligned}</span> Clearly, we have separated our two variables, <span class="math inline">$$x$$</span> and <span class="math inline">$$y$$</span>, to opposite sides of the equation. If the functions are integrable and the resulting integration can be solved for <span class="math inline">$$y$$</span>, then a solution can be obtained.;
;Note here that we have treated the <span class="math inline">$$\mathop{dy}/\mathop{dx}$$</span> derivative as a fraction which we have separated.;
<div class="displayquote">
;<strong>Example 1:</strong> Exponential growth behavior can be represented by the equation: <span class="math display">\begin{aligned} \FullDiff{}{y(t)}{t} &amp;= k y(t)\\ \intertext{or} \FullDiff{}{y}{t} &amp;= k y\\ \intertext{This expression simply states that the growth rate of some quantity y at time t is proportional to the value of y itself at that time. This is a separable equation:} \frac{1}{y}dy &amp;= k dt\\ \intertext{We can integrate both sides to get:} \int{\frac{1}{y}dy} &amp;= k \int{dt}\\ \text{ln}(y)+C_1 &amp;= k t + C_2\\ \intertext{where C_1 and C_2 are the constants of integration. These can be combined:} \text{ln}(y) &amp;= kt+\tilde{C}\\ y &amp;= e^{(kt+\tilde{C})}\\ y &amp;= Ce^{kt} \end{aligned}</span> This is clear exponential growth behavior as a function of time. Separation of variables is extremely useful in solving various ODEs and PDEs — it is employed in solving the diffusion equation.;
</div>
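The exponential-growth example lends itself to a quick numerical sanity check. The sketch below (not part of the original text; the constants are arbitrary test values) integrates $$dy/dt = ky$$ with a crude forward-Euler step and compares the result against the separated solution $$y = Ce^{kt}$$:

```python
import math

def euler_growth(y0, k, t_end, steps):
    """Integrate dy/dt = k*y with the forward-Euler method."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += k * y * dt
    return y

y0, k, t_end = 2.0, 0.5, 3.0             # arbitrary test constants
numeric = euler_growth(y0, k, t_end, steps=200_000)
analytic = y0 * math.exp(k * t_end)      # separated solution y = C e^{kt}, with C = y(0)
rel_err = abs(numeric - analytic) / analytic
```

With 200,000 steps the two results agree to a few parts per million, consistent with the first-order accuracy of the Euler method.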
</div>
<div id="sec:Sturm-Liouville" class="section level4">
<h4>Sturm-Liouville Boundary Value Problems</h4>
;In this section, we use Sturm-Liouville theory to solve a separable, linear, second-order homogeneous partial differential equation. Sturm-Liouville theory can be used on differential equations (here, in 1D) of the form: <span class="math display">\begin{aligned} \FullDiff{}{}{x}\Big[p(x)\FullDiff{}{y}{x}\Big]-q(x)y+\lambda r(x)y = 0 \label{eq:SturmLiouville} \intertext{or} \big[p(x)y\prime\big]\prime-q(x)y+\lambda r(x)y = 0 \label{eq:SturmLiouville-2}\end{aligned}</span> This type of problem requires the use of many concepts and techniques in solving ODEs, including separation of variables, Fundamental Solutions of Linear First- and Second-Order Homogeneous Equations, Fourier Series, and Orthogonal Solution Functions. It is important to note that the approach described below (adapted from J.J. Hoyt’s <em>Phase Transformations</em>), which employs separation of variables and Fourier transforms, works only on linear equations. A different approach must be taken for non-linear equations (such as Cahn-Hilliard).;
;We will use the example of a solid slab of material of length <span class="math inline">$$L$$</span> that has a constant concentration of some elemental species at time zero, <span class="math inline">$$\varphi(x,0) = \varphi_0$$</span>, for all <span class="math inline">$$x$$</span> within the slab. On either end of the slab we have homogeneous boundary conditions fixing the surface concentrations at <span class="math inline">$$\varphi(0,t) = \varphi(L,t) = 0$$</span> for all <span class="math inline">$$t$$</span>. The changing concentration profile, <span class="math inline">$$\varphi(x,t)$$</span>, is dictated by Fick’s second law, as described earlier in Eq. <a href="#eq:Ficks2" reference-type="ref" reference="eq:Ficks2"><span class="math display">$eq:Ficks2$</span></a>:;
;<span class="math display">$\Partial{}{\varphi(x,t)}{t} = D\Partial{2}{\varphi(x,t)}{x} \label{eq:Ficks2-1}$</span>;
;To use separation of variables, we define the concentration <span class="math inline">$$\varphi(x,t)$$</span>, which is dependent on both position and time, to be a product of two functions, <span class="math inline">$$T(t)$$</span> and <span class="math inline">$$X(x)$$</span>:;
;<span class="math display">\begin{aligned} \varphi(x,t) &amp;= T(t)X(x) \label{eq:SepVar-1} \intertext{or, in shorthand,} \varphi &amp;= TX\end{aligned}</span>;
;It isn’t clear why we do this at this point, but stay tuned. Combining Eqs. <a href="#eq:Ficks2-1" reference-type="ref" reference="eq:Ficks2-1"><span class="math display">$eq:Ficks2-1$</span></a> and <a href="#eq:SepVar-1" reference-type="ref" reference="eq:SepVar-1"><span class="math display">$eq:SepVar-1$</span></a> yields:;
;<span class="math display">$XT\prime = DTX\prime\prime$</span>;
;where the primed Lagrange notation denotes total derivatives; <span class="math inline">$$T$$</span> and <span class="math inline">$$X$$</span> are functions only of <span class="math inline">$$t$$</span> and <span class="math inline">$$x$$</span>, respectively. Now, we separate the variables completely to acquire:;
;<span class="math display">$\frac{1}{DT}T\prime = \frac{1}{X}X\prime\prime$</span>;
;This representation conveys something critical: each side of the equation must be equal to <em>the same</em> constant. This is because the two sides of the equation are equal to each other, and the only way a collection of time-dependent quantities can be equivalent to a collection of position-dependent quantities is for both to be constant with respect to both time and position. We select this constant — for reasons of convenience that will become clear later in the analysis — as <span class="math inline">$$-\lambda^2$$</span>:;
<div class="subequations">
;<span class="math display">\begin{aligned} \frac{1}{DT}T\prime &amp;= -\lambda^2 \label{eq:SepT}\\ \frac{1}{X}X\prime\prime &amp;= -\lambda^2 \label{eq:SepX} \end{aligned}</span>;
</div>
;Integration of Eq. <a href="#eq:SepT" reference-type="ref" reference="eq:SepT"><span class="math display">$eq:SepT$</span></a> proceeds by separation of variables: <span class="math display">\begin{aligned} \frac{1}{DT}T\prime &amp;= -\lambda^2 \nonumber\\ \frac{1}{T}\FullDiff{}{T}{t} &amp;= -\lambda^2 D \nonumber\\ \int \frac{1}{T}\Diff{}{T} &amp;= -\int \lambda^2 D \Diff{}{t} \nonumber\\ \ln{T} &amp;= -\lambda^2 D t + C \nonumber\\ T = T(t) &amp;= \exp{(-\lambda^2 D t + C)} \nonumber\\ \intertext{where C is the combined constant of integration. Defining T_0 = e^{C}, we have:} T(t) &amp;= T_0 \exp{(-\lambda^2 D t)} \label{eq:Tt}\end{aligned}</span> Eq. <a href="#eq:SepX" reference-type="ref" reference="eq:SepX"><span class="math display">$eq:SepX$</span></a>, on the other hand, is a linear, homogeneous, second-order ODE with constant coefficients that describes simple harmonic behavior. We can solve it by assessing its <a href="https://en.wikipedia.org/wiki/Characteristic_equation_(calculus)">characteristic equation</a>: <span class="math display">\begin{aligned} r^2+\lambda^2 = 0\\ \intertext{which has roots:} r = \pm \lambda i\end{aligned}</span> When the roots of the characteristic equation are of the form <span class="math inline">$$r = \alpha \pm \beta i$$</span>, the <a href="http://www.stewartcalculus.com/data/CALCULUS%20Concepts%20and%20Contexts/upfiles/3c3-2ndOrderLinearEqns_Stu.pdf">solution of the differential equation (Pg. 5)</a> is: <span class="math display">$y = e^{\alpha x}(c_1 \cos{\beta x} + c_2 \sin{\beta x})$</span>;
;In this instance, <span class="math inline">$$\alpha = 0$$</span> and <span class="math inline">$$\beta = \lambda$$</span>, so our solution is:;
;<span class="math display">$X = X(x) = \tilde{A} \cos{\lambda x} + \tilde{B} \sin{\lambda x} \label{eq:Xx}$</span>;
;<span class="math inline">$$\tilde{A}$$</span> and <span class="math inline">$$\tilde{B}$$</span> are constants that will be further simplified later. Recalling Eq. <a href="#eq:SepVar-1" reference-type="ref" reference="eq:SepVar-1"><span class="math display">$eq:SepVar-1$</span></a> and utilizing our results from Eqs. <a href="#eq:Tt" reference-type="ref" reference="eq:Tt"><span class="math display">$eq:Tt$</span></a> and <a href="#eq:Xx" reference-type="ref" reference="eq:Xx"><span class="math display">$eq:Xx$</span></a>, we find: <span class="math display">\begin{aligned} \varphi(x,t) &amp;= X(x)T(t) = T_0 \big[\tilde{A} \cos{\lambda x} + \tilde{B} \sin{\lambda x}\big]\exp{(-\lambda^2 D t)}\\ \intertext{where we now define T_0 \tilde{A} = A and T_0 \tilde{B} = B to get:} \varphi(x,t) &amp;= X(x)T(t) = \big[A\cos{\lambda x} + B\sin{\lambda x}\big]\exp{(-\lambda^2 D t)} \label{eq:DiffSol}\end{aligned}</span> Physically, this solution begins to make sense. At <span class="math inline">$$t=0$$</span> we have a constant concentration, and the concentration decays exponentially with time since <span class="math inline">$$D$$</span>, <span class="math inline">$$t$$</span>, and <span class="math inline">$$\lambda$$</span> are all positive, real constants. The concentration profile is a linear combination of sine and cosine functions, which does not yet yield any physical intuition for this system as we have yet to apply boundary conditions.;
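As a sanity check on the separated solution, one can verify numerically that $$\varphi(x,t) = [A\cos{\lambda x} + B\sin{\lambda x}]e^{-\lambda^2 D t}$$ satisfies Fick's second law. A minimal sketch (the constants A, B, λ, and D are arbitrary test values, not from the text):

```python
import math

def phi(x, t, A=1.3, B=0.7, lam=2.0, D=0.5):
    """Separated solution phi = X(x)T(t); constants are arbitrary test values."""
    return (A * math.cos(lam * x) + B * math.sin(lam * x)) * math.exp(-lam**2 * D * t)

# Compare d(phi)/dt against D * d2(phi)/dx2 using central finite differences
x, t, h, D = 0.4, 0.2, 1e-4, 0.5
dphi_dt = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
d2phi_dx2 = (phi(x + h, t) - 2 * phi(x, t) + phi(x - h, t)) / h**2
residual = abs(dphi_dt - D * d2phi_dx2)
```

The residual is at the level of the finite-difference truncation error, confirming that the product form solves the PDE for any choice of the constants.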
;Recall at this point that we have not specified any value for the constant <span class="math inline">$$\lambda$$</span>, as is typical when solving this type of Sturm-Liouville problem. This suggests that there is a possible solution for each value <span class="math inline">$$\lambda_n$$</span>. The Principle of Superposition dictates, then, that if Eq. <a href="#eq:DiffSol" reference-type="ref" reference="eq:DiffSol"><span class="math display">$eq:DiffSol$</span></a> is a solution, the complete solution to the problem is a summation over all possible solutions:;
;<span class="math display">\begin{aligned} \Aboxed{\varphi(x,t) = \sum_{n=1}^\infty \big[A_n\cos{\lambda_n x} + B_n\sin{\lambda_n x}\big]\exp{(-\lambda_n^2 D t)}} \label{eq:DiffSolFull}\end{aligned}</span>;
;As the value of <span class="math inline">$$\lambda$$</span> influences the values of <span class="math inline">$$A$$</span> and <span class="math inline">$$B$$</span>, these values must also be calculated for each <span class="math inline">$$\lambda_n$$</span>.;
;Now, to completely solve our well-posed boundary value problem, we utilize our boundary conditions:;
<div class="subequations">
;<span class="math display">\begin{aligned} \varphi(0,t) &amp;= 0, \quad t \geq 0 \label{eq:Boundx0}\\ \varphi(L,t) &amp;= 0, \quad t \geq 0 \label{eq:BoundxL}\\ \varphi(x,0) &amp;= \varphi_0, \quad 0&lt;x&lt;L \label{eq:Time0} \end{aligned}</span>;
</div>
;At <span class="math inline">$$x = 0$$</span>, the sine term in Eq. <a href="#eq:DiffSolFull" reference-type="ref" reference="eq:DiffSolFull"><span class="math display">$eq:DiffSolFull$</span></a> vanishes, so the boundary condition in Eq. <a href="#eq:Boundx0" reference-type="ref" reference="eq:Boundx0"><span class="math display">$eq:Boundx0$</span></a> can only be satisfied at all <span class="math inline">$$t$$</span> if <span class="math inline">$$A_n = 0$$</span>. At <span class="math inline">$$x = L$$</span>, <span class="math inline">$$\sin{\lambda_n L}$$</span> must be zero for every term in the sum, which requires <span class="math inline">$$\lambda_n = n\pi/L$$</span>. We now need only solve for <span class="math inline">$$B_n$$</span> using the initial condition, Eq. <a href="#eq:Time0" reference-type="ref" reference="eq:Time0"><span class="math display">$eq:Time0$</span></a>.;
;Using our values of <span class="math inline">$$A_n$$</span> and <span class="math inline">$$\lambda_n$$</span> and assessing Eq. <a href="#eq:DiffSolFull" reference-type="ref" reference="eq:DiffSolFull"><span class="math display">$eq:DiffSolFull$</span></a> at time <span class="math inline">$$t=0$$</span> yields;
;<span class="math display">$\varphi_0 = \sum_{n=1}^\infty B_n \sin{\frac{n \pi x}{L}} \label{eq:Time0-1}$</span>;
;Here, we must recognize the orthogonality property of the sine function, which states that;
;<span class="math display">$\int_0^L \sin{\frac{n \pi x}{L}} \sin{\frac{m \pi x}{L}} dx \begin{cases} = 0, &amp; \text{if}\ n\neq m \\ \neq 0, &amp; \text{if}\ n = m \end{cases}$</span>;
;You can test this graphically using a plotting program if you like — the integrated value of this product is only non-zero when <span class="math inline">$$n=m$$</span> — or you can follow the proof <a href="http://www.math.umd.edu/~psg/401/ortho.pdf">here</a>. We can multiply both sides of the Eq. <a href="#eq:Time0-1" reference-type="ref" reference="eq:Time0-1"><span class="math display">$eq:Time0-1$</span></a> by <span class="math inline">$$\sin{n \pi x/L}$$</span>, then, and integrate both sides from 0 to <span class="math inline">$$L$$</span>:;
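In lieu of the graphical test suggested above, a few lines of Python confirm the orthogonality integral numerically with a midpoint rule (an illustrative sketch; L = 1 is an arbitrary choice):

```python
import math

def overlap(n, m, L=1.0, N=20_000):
    """Midpoint-rule integral of sin(n*pi*x/L) * sin(m*pi*x/L) over [0, L]."""
    dx = L / N
    return dx * sum(
        math.sin(n * math.pi * (i + 0.5) * dx / L)
        * math.sin(m * math.pi * (i + 0.5) * dx / L)
        for i in range(N)
    )
```

`overlap(2, 5)` vanishes to numerical precision, while `overlap(3, 3)` returns L/2, exactly as the orthogonality relation requires.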
<div class="subequations">
;<span class="math display">\begin{aligned} \varphi_0 \int_0^L\sin{\frac{m \pi x}{L}} dx &amp;= \int_0^L \sum_{n=1}^\infty \big[B_n \sin{\frac{n \pi x}{L}} \sin{\frac{m \pi x}{L}}\big] dx \nonumber \intertext{After integration, the only term that survives on the right-hand side is the m=n term, and therefore:} \varphi_0 \int_0^L\sin{\frac{n \pi x}{L}} dx &amp;= B_n\int_0^L \sin^2{\frac{n \pi x}{L}} dx \nonumber\\ \varphi_0 \int_0^L\sin{\frac{n \pi x}{L}} dx &amp;= \frac{B_n L}{4} \big[2- \frac{\sin{2 n \pi}}{n \pi} \big] \nonumber\\ \intertext{the \sin{2 n \pi} term is always zero:} \varphi_0 \int_0^L\sin{\frac{n \pi x}{L}} dx &amp;= \frac{B_n L}{2} \nonumber\\ \frac{2\varphi_0}{L} \int_0^L\sin{\frac{n \pi x}{L}} dx &amp;= B_n \nonumber\\ B_n &amp;= \frac{2\varphi_0}{L} \big[\frac{L}{n \pi}(1-\cos{n \pi})\big] \nonumber\\ \Aboxed{B_n &amp;= \frac{2\varphi_0}{n \pi} (1-\cos{n \pi})} \end{aligned}</span>;
</div>
;For even values of <span class="math inline">$$n$$</span>, the <span class="math inline">$$B_n$$</span> constant is zero. For odd values of <span class="math inline">$$n$$</span>, <span class="math inline">$$B_n = \frac{4 \varphi_0}{n \pi}$$</span>. We now take the values we acquired for <span class="math inline">$$A_n$$</span>, <span class="math inline">$$B_n$$</span>, and <span class="math inline">$$\lambda_n$$</span> and plug them into Eq. <a href="#eq:DiffSolFull" reference-type="ref" reference="eq:DiffSolFull"><span class="math display">$eq:DiffSolFull$</span></a>. A change in summation index to account for the odd-only <span class="math inline">$$B_n$$</span> values yields:;
;<span class="math display">\begin{aligned} \Aboxed{\varphi(x,t) = \frac{4 \varphi_0}{\pi} \sum_{k=0}^\infty \frac{1}{2k+1} \sin{\frac{(2k+1)\pi x}{L}}\exp{\Big[-\big(\frac{(2k+1)\pi}{L}\big)^2 Dt\Big]}}\end{aligned}</span>;
;This summation converges quickly. We now have the ability to calculate the function <span class="math inline">$$\varphi(x,t)$$</span> at any position <span class="math inline">$$0 &lt; x &lt; L$$</span> and time <span class="math inline">$$t &gt; 0$$</span>!;
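The boxed series is easy to evaluate numerically. A short sketch (φ₀, L, and D here are arbitrary test values) that checks the boundary condition, the initial condition, and the long-time decay:

```python
import math

def phi(x, t, phi0=1.0, L=1.0, D=1e-2, terms=400):
    """Partial sum of the series solution for the slab with zero surface concentration."""
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        total += (math.sin(n * math.pi * x / L) / n
                  * math.exp(-(n * math.pi / L) ** 2 * D * t))
    return 4.0 * phi0 / math.pi * total

# phi(0, t) vanishes for all t; phi(x, 0) recovers phi0 in the slab interior
# (to within the slow convergence of the alternating series at t = 0);
# and the whole profile decays toward zero at long times.
```

At t = 0 the sine series converges only slowly (it is reconstructing a step), but for any t > 0 the exponential factors make the sum converge very quickly, as the text notes.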
</div>
<div id="method-of-integrating-factors" class="section level4">
<h4>Method of Integrating Factors</h4>
;The method of integrating factors is a technique commonly used to solve first-order linear ordinary differential equations (but it is not restricted to equations of that type). In thermodynamics, it is used to convert a differential equation that is not exact (i.e., path-dependent, see Sec. <a href="#subsec:eidiff" reference-type="ref" reference="subsec:eidiff">1.5.2</a>) to an exact equation, such as in the derivation of entropy as an exact differential (Release TBD).;
</div>
<div id="sec:FourierMethods" class="section level4">
<h4>Fourier Integral Transforms</h4>
;This section introduces an extremely powerful technique for solving differential equations: the Fourier transform. This technique is useful because it allows us to transform a complicated problem — a boundary value problem — into a simpler one that can often be approached with ODE techniques or even algebraically.;
;There are many excellent sources for this section, listed below.;
<ol style="list-style-type: decimal">
<li>;José Figueroa-O’Farrill’s wonderful <em>Integral Transforms</em> from <em>Mathematical Techniques III</em> at the University of Edinborough.;</li>
<li>;W.E Olmstead and V.A. Volpert’s <em>Differential Equations in Applied Mathematics</em> at Northwestern University.;</li>
<li>;J.J. Hoyt’s chapter on the <em>Mathematics of Diffusion</em> in his <em>Phase Transformations</em> text.;</li>
<li>;Paul Shewman’s <em>Diffusion in Solids</em>.;</li>
<li>;J.W. Brown and R.V. Churchill’s <em>Fourier Series and Boundary Values Problems</em>, 6<sup>th</sup> Edition.;</li>
</ol>
;The primary goal behind the Fourier transform is to solve a differential equation with some unknown function <span class="math inline">$$f$$</span>. We apply the transform (<span class="math inline">$$\mathscr{F}$$</span>) to convert the function into something that can be solved more easily: <span class="math inline">$$f \xrightarrow{\mathscr{F}} F$$</span>. The transformed function is often also represented using a <span class="math inline">$$\hat{f}$$</span>. We solve for <span class="math inline">$$F$$</span> and then perform an inverse Fourier transform (<span class="math inline">$$\mathscr{F}^{-1}$$</span>) to recover the solution for <span class="math inline">$$f$$</span>.;
;We find that Fourier <em>series</em> — which are used when working with periodic functions — can be generalized to Fourier integral transforms (or Fourier transforms) when the period of the function becomes infinitely long. Let’s begin with the Fourier series and build on our results from the discussion above, where we found that a continuous function <span class="math inline">$$f(x)$$</span> defined on some finite interval <span class="math inline">$$x \in[0,L]$$</span> and vanishing at the boundaries, <span class="math inline">$$f(0) = f(L) = 0$$</span>, can be expanded as shown in <a href="#eq:DiffSolFull" reference-type="ref" reference="eq:DiffSolFull"><span class="math display">$eq:DiffSolFull$</span></a>.;
;The following derivation is adapted from Olmstead and Volpert. In general, we can attempt to represent <em>any</em> function that is periodic over period <span class="math inline">$$[0,L]$$</span> with a Fourier series of form:;
;<span class="math display">$f(x) = a_0 + \sum_{n=1}^\infty\left[a_n \cos{\frac{2 \pi n x}{L}} + b_n \sin{\frac{2 \pi n x}{L}}\right] \label{eq:GenSol}$</span>;
;However, we need to know how to find the coefficients <span class="math inline">$$a_0$$</span>, <span class="math inline">$$a_n$$</span>, and <span class="math inline">$$b_n$$</span> for this representation of <span class="math inline">$$f(x)$$</span>. For this analysis we must utilize the following integral identities:;
;<span class="math display">$\int_0^L{\sin{\frac{2 \pi n x}{L}}\cos{\frac{2 \pi m x}{L}}}dx= 0 \quad n,m = 1,2,3,...,$</span>;
;<span class="math display">$\int_0^L{\cos{\frac{2 \pi n x}{L}}\cos{\frac{2 \pi m x}{L}}} dx= \begin{cases} 0, \text{\,if} \quad n,m = 1,2,3,..., n\neq m\\ L/2, \text{\,if} \quad n = m = 1,2,3,...,\\ \end{cases}$</span>;
;<span class="math display">$\int_0^L{\sin{\frac{2 \pi n x}{L}}\sin{\frac{2 \pi m x}{L}}} dx = \begin{cases} 0, \text{\,if} \quad n,m = 1,2,3,..., n\neq m\\ L/2, \text{\,if} \quad n = m = 1,2,3,...,\\ \end{cases}$</span>;
;<span class="math display">$\int_0^L{\cos{\frac{2 \pi n x}{L}}} dx = \begin{cases} 0, \text{\,if} \quad n = 1,2,3,...,\\ L, \text{\,if} \quad n = 0\\ \end{cases}$</span>;
;<span class="math display">$\int_0^L{\sin{\frac{2 \pi n x}{L}}} dx = 0, \text{\,if} \quad n = 0,1,2,3,...$</span>;
;These identities state the orthogonality properties of sines and cosines that will be used to derive the coefficients <span class="math inline">$$a_0$$</span>, <span class="math inline">$$a_n$$</span>, and <span class="math inline">$$b_n$$</span>. Recall that two functions <span class="math inline">$$f$$</span> and <span class="math inline">$$g$$</span> are orthogonal on an interval <span class="math inline">$$[a,b]$$</span> if;
;<span class="math display">$\int_a^b f(x)g(x)dx = 0$</span>;
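The five identities above can also be verified numerically. A compact sketch using midpoint-rule quadrature (L = 2 is an arbitrary period):

```python
import math

L, N = 2.0, 40_000
dx = L / N
xs = [(i + 0.5) * dx for i in range(N)]

def quad(f):
    """Midpoint-rule integral of f over one period [0, L]."""
    return dx * sum(f(x) for x in xs)

def c(n):
    return lambda x: math.cos(2 * math.pi * n * x / L)

def s(n):
    return lambda x: math.sin(2 * math.pi * n * x / L)
```

For example, `quad(lambda x: s(2)(x) * c(3)(x))` vanishes, while `quad(lambda x: c(2)(x)**2)` returns L/2, matching the identities.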
;We can therefore multiply Eq. <a href="#eq:GenSol" reference-type="ref" reference="eq:GenSol"><span class="math display">$eq:GenSol$</span></a> by <span class="math inline">$$\cos{\frac{2 \pi x}{L}}$$</span> (note <span class="math inline">$$n = 1$$</span>) and integrate over <span class="math inline">$$[0,L]$$</span>:;
;<span class="math display">\begin{aligned} \int_0^L f(x)\cos{\frac{2 \pi x}{L}} dx &amp;= a_0 \int_0^L \cos{\frac{2 \pi x}{L}} dx +\\ &amp;a_1 \int_0^L \cos{\frac{2 \pi x}{L}} \cos{\frac{2 \pi x}{L}} dx +b_1 \int_0^L \sin{\frac{2 \pi x}{L}} \cos{\frac{2 \pi x}{L}} dx +\\ &amp;a_2 \int_0^L \cos{\frac{4 \pi x}{L}} \cos{\frac{2 \pi x}{L}} dx +b_2 \int_0^L \sin{\frac{4 \pi x}{L}} \cos{\frac{2 \pi x}{L}} dx + ...\\\end{aligned}</span>;
;Applying the orthogonality properties shows that all terms on the right-hand side of this equation are zero apart from the <span class="math inline">$$a_1$$</span> term. The equation therefore reduces to:;
;<span class="math display">$\int_0^L f(x)\cos{\frac{2 \pi x}{L}} dx = a_1 \int_0^L \cos{\frac{2 \pi x}{L}} \cos{\frac{2 \pi x}{L}} dx= a_1\frac{L}{2}$</span>;
;and;
;<span class="math display">$a_1 = \frac{2}{L} \int_0^L f(x)\cos{\frac{2 \pi x}{L}} dx$</span>.;
;The other Fourier coefficients can be solved for in a similar manner, which yields the general solutions:;
;<span class="math display">\begin{aligned} a_0 &amp;= \frac{1}{L} \int_0^L f(x)dx\\ a_n &amp;= \frac{2}{L} \int_0^L f(x)\cos{\frac{2 n \pi x}{L}}dx \quad (n = 1,2,3...)\\ b_n &amp;= \frac{2}{L} \int_0^L f(x)\sin{\frac{2 n \pi x}{L}}dx\quad (n = 1,2,3...)\end{aligned}</span>;
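To see these coefficient formulas in action, the sketch below builds a test function from known harmonics (an arbitrary choice) and recovers its coefficients by numerical quadrature:

```python
import math

L, N = 1.0, 4_000
dx = L / N
xs = [(i + 0.5) * dx for i in range(N)]

def f(x):
    """Test function with a0 = 0.3, a1 = 1.0, b2 = -0.5; all other coefficients zero."""
    return 0.3 + math.cos(2 * math.pi * x / L) - 0.5 * math.sin(4 * math.pi * x / L)

def quad(g):
    """Midpoint-rule integral of g over [0, L]."""
    return dx * sum(g(x) for x in xs)

a0 = quad(f) / L
a = {n: 2 / L * quad(lambda x: f(x) * math.cos(2 * n * math.pi * x / L)) for n in (1, 2)}
b = {n: 2 / L * quad(lambda x: f(x) * math.sin(2 * n * math.pi * x / L)) for n in (1, 2)}
```

The quadrature recovers a₀ = 0.3, a₁ = 1.0, and b₂ = −0.5, with the remaining coefficients vanishing to numerical precision.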
;To this point we’ve solved, generally, for the coefficients of a Fourier series over a finite interval. This is useful, but we may want to use the full complex form of the Fourier series in the later discussion of the Fourier transform. We know that <a href="https://en.wikipedia.org/wiki/Euler&#39;s_formula#Relationship_to_trigonometry">Euler’s formula</a> can be used to express trigonometric functions with the complex exponential function:;
;<span class="math display">\begin{aligned} \sin{\frac{2 \pi n x}{L}} &amp;= \frac{1}{2i}\left(e^{i\frac{2 \pi n x}{L}}-e^{-i\frac{2 \pi n x}{L}}\right) \nonumber \\ \cos{\frac{2 \pi n x}{L}} &amp;= \frac{1}{2}\left(e^{i\frac{2 \pi n x}{L}}+e^{-i\frac{2 \pi n x}{L}}\right) \label{eq:Euler}\end{aligned}</span>;
;and we define the wavenumbers to be:;
;<span class="math display">$k_n = 2 \pi n/L \quad n=0,\pm1,\pm2,..., \label{eq:Wavenumber}$</span>;
;and therefore Eq. <a href="#eq:Euler" reference-type="ref" reference="eq:Euler"><span class="math display">$eq:Euler$</span></a> is written as:;
;<span class="math display">\begin{aligned} \sin{k_n x} &amp;= \frac{1}{2i}\left(e^{i k_n x}-e^{-i k_n x}\right) \nonumber \\ \cos{k_n x} &amp;= \frac{1}{2}\left(e^{i k_n x}+e^{-i k_n x}\right)\end{aligned}</span>;
;This allows us to write the complete Fourier series (Substitute Eq. <a href="#eq:Euler" reference-type="ref" reference="eq:Euler"><span class="math display">$eq:Euler$</span></a> into <a href="#eq:GenSol" reference-type="ref" reference="eq:GenSol"><span class="math display">$eq:GenSol$</span></a>):;
;<span class="math display">$f_{L}(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{i k_n x} \label{eq:ComplexFourierSeries}$</span>;
;For convenience, we’ll define the interval to be <span class="math inline">$$[-\frac{L}{2},\frac{L}{2}]$$</span>. The <span class="math inline">$$\sim$$</span> notation indicates that the series representation is an approximation, and the <span class="math inline">$$L$$</span> represents the period over which the series is applied. The orthogonality condition holds for these complex exponentials and their complex conjugates over this interval:;
;<span class="math display">$\int_{-\frac{L}{2}}^{\frac{L}{2}} e^{i k_n x} e^{-i k_m x} dx = \begin{cases} 0, \text{\,if} \quad n \neq m\\ L, \text{\,if} \quad n = m\\ \end{cases}$</span>;
;Therefore, if we multiply Eq. <a href="#eq:ComplexFourierSeries" reference-type="ref" reference="eq:ComplexFourierSeries"><span class="math display">$eq:ComplexFourierSeries$</span></a> by <span class="math inline">$$e^{-i k_m x}$$</span> and integrate, we find the only term that survives is the <span class="math inline">$$n=m$$</span> term:;
;<span class="math display">$\int_{-\frac{L}{2}}^{\frac{L}{2}} f_L(x) e^{-i k_m x} dx = L c_m.$</span>;
;We can revert the place-keeping subscript <span class="math inline">$$m$$</span> to <span class="math inline">$$n$$</span> and solve for the Fourier coefficients to find:;
;<span class="math display">$c_n = \frac{1}{L}\int_{-\frac{L}{2}}^{\frac{L}{2}} f_L(x) e^{-i k_n x} dx \quad \text{for\,} n = 0, \pm1, \pm2,... \label{eq:FourierComps}$</span>;
;Alright, we’ve defined an interval <span class="math inline">$$[-L/2, L/2]$$</span>, but we want to investigate this interval as <span class="math inline">$$L \rightarrow \infty$$</span> in an attempt to eliminate the periodicity of the Fourier series. If <span class="math inline">$$L \rightarrow \infty$$</span>, we know that our function <span class="math inline">$$f_L(x)$$</span> will be non-zero over only a very small range — say an interval of <span class="math inline">$$[-a/2, a/2]$$</span> where <span class="math inline">$$a \ll L$$</span>. This means that;
;<span class="math display">$f_L(x) = \begin{cases} 1 \quad \text{for}\ |x|&lt;a/2\\ 0 \quad \text{for}\ a/2 &lt; |x| &lt; L/2 \end{cases} \label{eq:Linfty}$</span>;
;This function is zero except for a small bump at the origin of height 1 and width <span class="math inline">$$a$$</span>. Let’s evaluate the Fourier coefficients (Eq. <a href="#eq:FourierComps" reference-type="ref" reference="eq:FourierComps"><span class="math display">$eq:FourierComps$</span></a>) over the non-zero interval:;
;<span class="math display">$c_n = \frac{1}{L}\int_{-\frac{a}{2}}^{\frac{a}{2}} e^{-i k_n x} dx = \frac{\sin{(k_n a/2)}}{k_n L/2} = \frac{\sin{(n \pi a/L)}}{n \pi}. \label{eq:Thing}$</span>;
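The coefficient formula for the bump can be checked directly. This sketch (a and L are arbitrary test values with a ≪ L) integrates $$e^{-i k_n x}$$ numerically over the bump and compares against $$\sin{(n\pi a/L)}/(n\pi)$$:

```python
import cmath
import math

L, a = 10.0, 1.0  # arbitrary test values with a << L

def c_n(n, N=20_000):
    """Midpoint-rule evaluation of (1/L) * integral of exp(-i k_n x) over [-a/2, a/2]."""
    k = 2 * math.pi * n / L
    dx = a / N
    total = sum(cmath.exp(-1j * k * (-a / 2 + (i + 0.5) * dx)) for i in range(N))
    return total * dx / L

def closed_form(n):
    return math.sin(n * math.pi * a / L) / (n * math.pi)
```

The numerical coefficients agree with the closed form for each harmonic, and shrink like 1/n modulated by the sine envelope.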
;But how does this allow us to consider a continuous Fourier integral transform? Well, we need to consider the function <span class="math inline">$$f_L(x)$$</span> as <span class="math inline">$$L \rightarrow \infty$$</span>. By doing so, we push the periodic images of the bump out to infinity; <span class="math inline">$$f(x)$$</span> then has a single bump of width <span class="math inline">$$a$$</span> centered at the origin. The separation between the <span class="math inline">$$n$$</span> harmonics goes to zero, and the representation contains all harmonics. Further, as <span class="math inline">$$L \rightarrow \infty$$</span> the fundamental period becomes so large that the function is effectively no longer periodic at all.;
;This allows us to transition from a discrete description to a continuum, and our Fourier sum can now be described as a Fourier integral. Recall the Fourier series (Eq. <a href="#eq:ComplexFourierSeries" reference-type="ref" reference="eq:ComplexFourierSeries"><span class="math display">$eq:ComplexFourierSeries$</span></a>):;
;<span class="math display">$f_{L}(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{i k_n x}$</span>;
;Which we now write as:;
;<span class="math display">$f_{L}(x) \sim \sum_{n=-\infty}^{\infty} \frac{\Delta k}{2\pi/L} c_n e^{i k_n x} = \sum_{n=-\infty}^{\infty} \frac{\Delta k}{2\pi} L c_n e^{i k_n x} \label{eq:Thing2}$</span>;
;where <span class="math inline">$$\Delta k = 2\pi/L$$</span> is the difference between successive values of <span class="math inline">$$k_n$$</span>. We now define a function <span class="math inline">$$F(k)$$</span> as;
;<span class="math display">$F(k) \equiv \Lim{L \rightarrow \infty} L c_n = \Lim{L \rightarrow \infty} L c_{kL/2 \pi} \label{eq:DefFourierTransform}$</span>;
;Combining this definition with Eq. <a href="#eq:Thing" reference-type="ref" reference="eq:Thing"><span class="math display">$eq:Thing$</span></a> gives:;
;<span class="math display">$F(k) = \frac{\sin{(ka/2)}}{k/2}$</span>;
;and as <span class="math inline">$$L \rightarrow \infty$$</span>, Eq. <a href="#eq:Thing2" reference-type="ref" reference="eq:Thing2"><span class="math display">$eq:Thing2$</span></a> goes as;
;<span class="math display">\begin{aligned} f(x) &amp;= \Lim{L \rightarrow \infty} \sum_{n=-\infty}^{\infty} \frac{\Delta k}{2\pi} L c_n e^{i k_n x} \\ \Aboxed{f(x) &amp;= \frac{1}{2\pi}\int_{-\infty}^{\infty}F(k) e^{ikx} dk} \label{eq:FourierInversion}\end{aligned}</span>;
;The function <span class="math inline">$$F(k)$$</span> is the Fourier transform of <span class="math inline">$$f(x)$$</span>, and Eq. <a href="#eq:FourierInversion" reference-type="ref" reference="eq:FourierInversion"><span class="math display">$eq:FourierInversion$</span></a> expresses <span class="math inline">$$f(x)$$</span> as a continuous superposition of Fourier components, with each component now weighted by the <em>continuous</em> function <span class="math inline">$$F(k)$$</span>. Similarly, from Eq. <a href="#eq:DefFourierTransform" reference-type="ref" reference="eq:DefFourierTransform"><span class="math display">$eq:DefFourierTransform$</span></a> and Eq. <a href="#eq:FourierComps" reference-type="ref" reference="eq:FourierComps"><span class="math display">$eq:FourierComps$</span></a>:;
;<span class="math display">\begin{aligned} \Aboxed{F(k) &amp;= \int_{-\infty}^{\infty} f(x) e^{-i k x} dx} \label{eq:FourierTransform}\end{aligned}</span>;
;Eq. <a href="#eq:FourierTransform" reference-type="ref" reference="eq:FourierTransform"><span class="math display">$eq:FourierTransform$</span></a> is the non-periodic analog of the expression for deriving the Fourier coefficients <span class="math inline">$$c_n$$</span> in the periodic case. We call this function the <em>Fourier (Integral) Transform</em> of the function <span class="math inline">$$f(x)$$</span>, and it is often written as;
;<span class="math display">$F(k) \equiv \mathscr{F}\big[f(x)\big] \label{eq:InversionFormula}$</span>;
;Similarly, Eq. <a href="#eq:FourierInversion" reference-type="ref" reference="eq:FourierInversion"><span class="math display">$eq:FourierInversion$</span></a> is known as the <em>Inversion Formula</em> or <em>Inverse Fourier (Integral) Transform</em> and is used to return the Fourier-transformed function from frequency space. It is often represented as:;
;<span class="math display">$f(x) \equiv \mathscr{F}^{-1}\big[F(k)\big]$</span>;
<div class="displayquote">
;<strong>Example:</strong> Let’s do a simple Fourier transform of a <a href="https://en.wikipedia.org/wiki/Square-integrable_function">square-integrable function</a> (this condition establishes that the function has a Fourier transform). We’ll try a square pulse over the interval <span class="math inline">$$[-\pi,\pi]$$</span>:;
;<span class="math display">$f(x) = \begin{cases} 1, \text{\,if\,} |x| &lt; \pi\\ 0, \text{\,otherwise}\\ \end{cases}$</span>;
;We take the Fourier integral transform over the non-zero interval:;
;<span class="math display">\begin{aligned} F(k) &amp;= \int_{-\infty}^{\infty} f(x) e^{-i k x} dx\\ &amp;= \int_{-\pi}^{\pi} e^{-i k x} dx\\ &amp;= -\frac{1}{i k} e^{-i k x}\Big|_{-\pi}^{\pi}\\ &amp;= -\frac{1}{i k} (e^{-i k \pi}-e^{i k \pi})\\ &amp;= \frac{2\sin{(\pi k)}}{k} \end{aligned}</span>;
</div>
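The square-pulse result can be checked by numerical quadrature. The sketch below uses the convention of Eq. <a href="#eq:FourierTransform" reference-type="ref" reference="eq:FourierTransform"><span class="math display">$eq:FourierTransform$</span></a>, <span class="math inline">$$F(k) = \int f(x) e^{-ikx} dx$$</span> (be aware that other texts differ by factors of <span class="math inline">$$2\pi$$</span>):

```python
import cmath
import math

def F(k, N=40_000):
    """Numerical transform of the square pulse: integral of exp(-i k x) over [-pi, pi]."""
    dx = 2 * math.pi / N
    return dx * sum(cmath.exp(-1j * k * (-math.pi + (i + 0.5) * dx)) for i in range(N))

def closed_form(k):
    return 2 * math.sin(math.pi * k) / k
```

The imaginary part of `F(k)` vanishes because the pulse is even, and the real part traces out the familiar sinc-like shape.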
;Integral transforms will prove massively useful in solving boundary value problems in the MAT<code>_</code>SCI core.;
<div class="displayquote">
;<strong>Example:</strong> One example is diffusion in the thin-film problem. Imagine that there is a thin region of finite width with a high concentration of some species <strong>B</strong> situated between two “infinite” (thick) plates of pure <strong>A</strong> (after Hoyt, 1-6). Diffusion from the thin film is allowed to proceed over time into the adjacent plates. The thin film is centered at <span class="math inline">$$x = 0$$</span>, so the concentration profile will be an even function <span class="math inline">$$[\varphi(x,t) = \varphi(-x,t)]$$</span>. How do we solve for the evolution of the concentration profile over time?;
;This is an example in which we want to interpret the geometry as one with an infinite period. When doing so, we should consider using a Fourier integral transform.;
;From the section above, we understand that the concentration profile <span class="math inline">$$\varphi(x,t)$$</span> can be obtained from the inverse transform of the Fourier-space function <span class="math inline">$$\Phi(k,t)$$</span>. Above, we derived the full Fourier integral transform, but here we know that the function is even, and so we can perform a Fourier cosine integral transform, which simplifies the mathematics and allows us to perform the transform over <span class="math inline">$$[0,\infty)$$</span>. The following derivation is after Hoyt, Ch. 1-8.;
;<span class="math display">\begin{aligned} f(x) = \int_{-\infty}^{\infty} F(k) \cos{(kx)} dk\\ F(k) = \frac{1}{\pi} \int_0^{\infty} f(x) \cos{(kx)} dx\end{aligned}</span>;
;In our case, we have:;
;<span class="math display">\begin{aligned} \varphi(x,t) = \int_{-\infty}^{\infty} \Phi(k,t) \cos{(kx)} dk\\ \Phi(k,t) = \frac{1}{\pi} \int_0^{\infty} \varphi(x,t) \cos{(kx)} dx \label{eq:OddSol1}\end{aligned}</span>;
;The utility of the Fourier integral transform is that PDEs in space and time can be converted to ODEs in time alone, which are often much easier to solve. Our ability to do this hinges on a key property of the Fourier transform that relates the Fourier transform of the <span class="math inline">$$n$$</span><sup>th</sup> derivative of a function to the Fourier transform of the function itself.;
;<span class="math display">$\mathscr{F}\big[f^{(n)}(x)\big](k) = (ik)^{n}\mathscr{F}\big[f(x)\big](k)$</span>;
;This, as you will see, allows us to convert a PDE in <span class="math inline">$$t$$</span> and <span class="math inline">$$x$$</span> to an ODE in <span class="math inline">$$t$$</span> alone. Let’s apply this property to the 1D diffusion equation. First, we know we are performing a Fourier transform in <span class="math inline">$$x$$</span>, so the time derivative can be pulled out of the integral on the left-hand side of the equation.;
;<span class="math display">\begin{aligned} \mathscr{F}[\Partial{}{\varphi(x,t)}{t}] &amp;= \frac{1}{\pi}\int_{0}^{\infty} \Partial{}{\varphi(x,t)}{t} \cos{(kx)} dx\\ &amp;= \frac{1}{\pi}\Partial{}{}{t}\left[\int_{0}^{\infty}\varphi(x,t)\cos{(kx)} dx\right]\\ &amp;= \frac{1}{\pi}\Partial{}{}{t}\left[\Phi(k,t)\right]\end{aligned}</span>;
;and the right-hand side of the equation is:;
;<span class="math display">\begin{aligned} \mathscr{F}\big[D\Partial{2}{}{x}\varphi(x,t)\big] &amp;= (ik)^2D\mathscr{F}\big[\varphi(x,t)\big]\\ &amp;= -D\frac{k^2}{\pi}\int_0^{\infty}\varphi(x,t)\cos{(kx)} dx\\ &amp;= -D\frac{k^2}{\pi} \Phi(k,t)\end{aligned}</span>;
;and therefore:;
;<span class="math display">\begin{aligned} \frac{1}{\cancel{\pi}}\Partial{}{}{t}\left[\Phi(k,t)\right] &amp;= -D\frac{k^2}{\cancel{\pi}} \Phi(k,t)\\ \Aboxed{\Partial{}{}{t}\left[\Phi(k,t)\right] &amp;= -Dk^2 \Phi(k,t)}\end{aligned}</span>;
;This differential equation can be solved by inspection<a href="#fn3" class="footnote-ref" id="fnref3"><sup>3</sup></a> to be:;
;<span class="math display">$\Phi(k,t) = A^{0}(k) e^{-k^2Dt} \label{eq:Sol1}$</span>;
;Here <span class="math inline">$$A^0(k)$$</span> is a constant (in time) that defines the Fourier space function <span class="math inline">$$\Phi$$</span> at <span class="math inline">$$t=0$$</span>. To fully solve this problem and derive <span class="math inline">$$\varphi(x,t)$$</span>, we must next solve for <span class="math inline">$$A^0(k)$$</span> and apply the inverse Fourier transform.;
;Let us consider our initial condition. Our concentration profile can be modeled as a <span class="math inline">$$\delta$$</span>-function concentration profile, <span class="math inline">$$\varphi(x,0) = \alpha \delta(x)$$</span>, fixed between two infinite plates, where the integrated concentration is <span class="math inline">$$\alpha$$</span>:;
;<span class="math display">$\int_{-\infty}^{\infty} \varphi(x,0)dx = \int_{-\infty}^{\infty} \alpha \delta(x) dx = \alpha$</span>;
;The constant <span class="math inline">$$A^0(k)$$</span> is, at <span class="math inline">$$t = 0$$</span>, defined by Eq. <a href="#eq:Sol1" reference-type="ref" reference="eq:Sol1">[eq:Sol1]</a> and Eq. <a href="#eq:OddSol1" reference-type="ref" reference="eq:OddSol1">[eq:OddSol1]</a> to be:;
;<span class="math display">\begin{aligned} \Phi(k,0) &amp;= A^{0}(k)e^{0}\\ &amp;= \frac{1}{\pi}\int_{0}^{\infty}\varphi(x,0) \cos{(kx)} dx\end{aligned}</span>;
;Inserting the delta function for <span class="math inline">$$\varphi(x,0)$$</span> yields:;
;<span class="math display">$A^{0}(k) = \frac{\alpha}{\pi}\int_{0}^{\infty} \delta(x) \cos{(kx)} dx$</span>;
;We’ll take advantage of the evenness of this function and instead integrate over <span class="math inline">$$(-\infty, \infty)$$</span>. This allows us to avoid the messiness at <span class="math inline">$$x=0$$</span> as well as circumvent using the Heaviside step function.;
;<span class="math display">\begin{aligned} A^{0}(k) &amp;= \frac{\alpha}{2\pi}\int_{-\infty}^{\infty} \delta(x) \cos{(kx)} dx\\ A^{0}(k) &amp;= \frac{\alpha}{2\pi} \cos{(0)}\\ \Aboxed{A^{0}(k) &amp;= \frac{\alpha}{2\pi}}\end{aligned}</span>;
;Finally, now that we have <span class="math inline">$$A^{0}$$</span>, we must perform the inverse transformation to find the expression for <span class="math inline">$$\varphi(x,t)$$</span>.;
;<span class="math display">\begin{aligned} \varphi(x,t) &amp;= \frac{\alpha}{2\pi}\int_{-\infty}^{\infty} e^{-k^2Dt} \cos{(kx)} dk\\ \intertext{This can be completed through integration by parts, trigonometric identities, and completing the square... for explicit step-by-step analysis use Wolfram|Alpha or \href{http://www.integral-calculator.com/}{Scherfgen&#39;s Integral Calculator}. Let&#39;s state the result:} \varphi(x,t) &amp;= \frac{\alpha}{2\sqrt{\pi D t}} e^{-x^2/4Dt}\end{aligned}</span>;
;This is a Gaussian distribution centered at <span class="math inline">$$x = 0$$</span> whose width increases with time. This is good — it certainly makes sense intuitively.;
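The Gaussian result above can be checked numerically: evaluate the inverse-transform integral for φ(x,t) by direct quadrature and compare it against the closed-form expression. This is a sketch with arbitrary illustrative values for α, D, and t, not values from the text.

```python
import numpy as np

# Quadrature check of phi(x,t) = (alpha / 2 pi) * Int e^{-k^2 D t} cos(kx) dk
alpha, D, t = 1.0, 1.0e-2, 5.0          # arbitrary illustrative values
x = np.linspace(-1.0, 1.0, 201)

k = np.linspace(-200.0, 200.0, 20001)    # integrand is ~0 beyond this range
dk = k[1] - k[0]
phi_numeric = np.array([
    alpha / (2.0 * np.pi) * np.sum(np.exp(-k**2 * D * t) * np.cos(k * xi)) * dk
    for xi in x
])

# Closed-form Gaussian from the derivation
phi_exact = alpha / (2.0 * np.sqrt(np.pi * D * t)) * np.exp(-x**2 / (4.0 * D * t))

print(np.max(np.abs(phi_numeric - phi_exact)))  # small quadrature error
```

The agreement confirms that the inverse cosine transform of A⁰(k)e^{-k²Dt} is indeed the spreading Gaussian.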
</div>
</div>
<div id="bessel-functions" class="section level4">
<h4>Bessel Functions</h4>
</div>
<div id="legendre-polynomials" class="section level4">
<h4>Legendre Polynomials</h4>
</div>
<div id="eulers-method" class="section level4">
<h4>Euler’s Method</h4>
</div>
</div>
<div id="solving-second-order-linear-odes-release-tbd" class="section level3">
<h3>Solving Second-order Linear ODEs (Release TBD)</h3>
<ol style="list-style-type: decimal">
<li>;Principle of Superposition;</li>
<li>;Series Solutions;</li>
</ol>
</div>
<div id="laplace-transforms-release-tbd" class="section level3">
<h3>Laplace Transforms (Release TBD)</h3>
</div>
<div id="stability-theory-release-tbd" class="section level3">
<h3>Stability Theory (Release TBD)</h3>
</div>
</div>
</div>
<div class="footnotes">
<hr />
<ol>
<li id="fn1">;sometimes, an inexact differential will be denoted as <span class="math inline">$$\delta f$$</span><a href="#fnref1" class="footnote-back">↩︎</a>;</li>
<li id="fn2">;these may be combined to a 1-quarter class in the future<a href="#fnref2" class="footnote-back">↩︎</a>;</li>
<li id="fn3">;If you don’t see this, that’s fine, review <em>Separation of Variables</em> — this equation is separable<a href="#fnref3" class="footnote-back">↩︎</a>;</li>
</ol>
</div>

</div>




</body>
</html>


## 4 Introduction

Materials science is the investigation of the relationships among the properties, structure, and processing of materials, with the goal of optimizing the performance of some system. These relationships are often illustrated with the materials science tetrahedron shown in Figure 4.1.
Figure 4.1: The materials science tetrahedron.

## 5 Bonding

### 5.1 Outcomes

• Evaluate interatomic energy curves to derive equilibrium interatomic spacing and bond energy (using calculus and graphically).
• Identify the mechanistic contributions to repulsive and attractive forces/energies in interatomic force/energy curves.
• Associate different types of bonding with materials classes.
• Understand the characteristics of different bonding models, including strength and directionality.
• Infer general properties from bond type.
• Differentiate between predominantly ionic and covalent bond character by assessing the electronegativity difference.

### 5.2 General Concepts and the Role of the Interatomic Potential

A generic feature of all bonds is that they can be described by an interatomic potential of the sort shown in Figure 5.1. This potential can be viewed as the sum of an attractive portion that draws the atoms close to one another and a repulsive, short-range interaction that maintains a preferred separation comparable to the atomic size. Primary bonds are strong bonds with a deep potential well, and include covalent, ionic, and metallic bonds. Secondary bonds, including van der Waals interactions and hydrogen bonds, are much weaker, with a shallower potential well.
The following properties are directly related to the nature of the interatomic potential:
##### Thermal expansion
Thermal expansion is directly connected to the asymmetry of the potential well, as illustrated in Figure 5.2b.
##### Elastic modulus (stiffness)
The higher the curvature of the well, the larger the stiffness.
Question: Calculate the equilibrium spacing for the following interatomic potential:
$E=\frac{B}{{r}^{3}}-\frac{A}{r}$
Answer: We differentiate the energy to get the force, $F$:
$F=\frac{dE}{dr}=-\frac{3B}{{r}^{4}}+\frac{A}{{r}^{2}}$
At equilibrium, the force is equal to zero, so we have:
$A=\frac{3B}{{r}^{2}},\phantom{\rule{6px}{0ex}}\phantom{\rule{6px}{0ex}}r=\sqrt{3B/A}$
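The worked example above can be cross-checked numerically by scanning the potential for its minimum. The values of A and B below are arbitrary illustrative choices, not constants from the text.

```python
import numpy as np

# Numerical cross-check of the equilibrium spacing for E(r) = B/r^3 - A/r
A, B = 2.0, 1.5                          # illustrative constants (arbitrary units)
r = np.linspace(0.5, 10.0, 100001)       # trial interatomic spacings
E = B / r**3 - A / r                     # interatomic potential

r_numeric = r[np.argmin(E)]              # spacing that minimizes the energy
r_analytic = np.sqrt(3.0 * B / A)        # r0 = sqrt(3B/A) from setting dE/dr = 0

print(r_numeric, r_analytic)             # both ~1.5 for these constants
```

Finding the minimum of the energy curve and setting the force (the derivative) to zero are equivalent, which is exactly the point of the example.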
Figure 5.2: Illustration of the relationship between interatomic potentials and some relevant material properties.
With bonding, everything starts with the periodic table, shown in Figure 5.3. At a simple level, the type of bonding between atoms is determined by their locations on the periodic table.
Electronegativity arises due to elements’ energetic favorability to reach a stable electron configuration.

### 5.3 Ionic Bonding

Ionic bonding typically occurs between a metal and a non-metal, and involves the transfer of an electron from an atom with low electronegativity (producing a cation, + charge) to an atom with high electronegativity (producing an anion, − charge). Examples are shown in Figure 5.4, where the electronegativities of the different elements are shown. 'Pure' ionic bonding occurs in systems where there is a large difference in electronegativity between the constituent elements, typically 2.7 or more. In practical terms this means that a large fraction of ionic materials have oxygen, fluorine, chlorine, or bromine as the anion, corresponding respectively to oxides, fluorides, chlorides, and bromides.
Figure 5.4: Examples of some common ionic solids, with the corresponding electronegativity values of the elements from which they are formed.
Figure 5.5:

### 5.4 Covalent Bonding

Covalent bonding occurs between atoms of similar electronegativity, where electrons are shared in order to minimize energy:
• Bonds are directional and occur between specific atoms participating in localized electron sharing.
• Small differences in electronegativity facilitate sharing.
• Common in non-metallic compounds and elements from the right side of the periodic table (excluding noble gases): B, C, Si, Ge, Cl${}_{2}$, O${}_{2}$, …
• Hugely varying properties: from strong [C (diamond, graphene)] to relatively weak (I${}_{2}$); frequently brittle; electrically insulating, semiconducting, or conducting; transparent or opaque. Other examples: Si, InSb, SiC.
Figure 5.6: Methane (CH${}_{4}$) with tetrahedral coordination resulting from the sp${}^{3}$ hybridized orbital.

### 5.5 Metallic Bonding

Metallic bonding is found in metals and their alloys. The valence electrons are delocalized to form an “electron cloud/sea/gas” or “Fermi liquid”, as illustrated in Figure 5.7. These are referred to as the conduction electrons and are shared between all of the atoms in the material. The positive ionic cores are held together by this electron “glue”. As with ionic bonding, metallic bonding is non-directional, meaning that rotating an atomic core does not affect the nature of the interaction. The average electronegativity of the atoms in metallic systems is generally low, so electrons are easily donated from the individual atoms to the electron 'sea'.
Metallic bonding is responsible for: ductility/malleability (W6), conduction of heat/electricity (W8), shininess/opacity (W9), and thermal conductivity (W9).
Figure 5.7: Schematic representation of metallic bonding.

### 5.6 Mixed Bond Character

Figure 5.8: Percent ionic character as a function of the electronegativity difference between atoms.

### 5.7 Hydrogen Bonds

Figure 5.9: Schematic of Hydrogen Bonds.

Figure 5.10:

## 6 Crystal Structures

The animation below illustrates 3 crystal structures of metals: face centered cubic (fcc), body centered cubic (bcc) and simple cubic (sc):


## 7 Dislocations

Plastic deformation of a crystalline solid occurs by the motion of dislocations, which are one-dimensional defects in the crystal structure. In general, deformation of a material occurs by shear along specific planes called slip planes. An illustration of this effect in single crystal aluminum is shown in Figure 7.1. The material in this image is being deformed in tension, but the slip occurs along suitably oriented planes that are experiencing a high degree of shear.
Figure 7.1: Slip bands in single crystal aluminum undergoing tensile deformation.
The resolved shear stress, ${\tau }_{rss}$, acting on a slip system is related to the applied tensile stress, $\sigma$, by Schmid's law:
${\tau }_{rss}=\sigma \cos\phi \cos\lambda \qquad \left(7.1\right)$
where $\phi$ is the angle between the tensile axis and the slip plane normal, $\stackrel{\to }{n}$, and $\lambda$ is the angle between the tensile axis and the slip direction, $\stackrel{\to }{d}$.
Values of this quantity for different single crystals are shown in Table 1. For the materials with close-packed crystal structures on this list (fcc and hcp), the value of ${\tau }_{crss}$ is about four orders of magnitude less than the shear modulus, $G$.
A note about units of stress:
The SI unit of stress is the pascal (Pa), or N/m${}^{2}$. We generally use SI units in this text, but English units (pounds per square inch, or psi) are still often used in engineering fields. One useful number to remember is that atmospheric pressure is $\approx 10^{5}$ Pa, or 14.7 psi. The exact conversion is 1 psi = 6895 Pa = 6.895 kPa.
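The unit conversion above is easy to encode once and reuse. A minimal helper sketch (the constant and function names are our own, not from the text):

```python
# 1 psi = 6895 Pa, the exact conversion factor quoted in the note above
PSI_TO_PA = 6895.0

def psi_to_pa(psi: float) -> float:
    """Convert a stress from pounds per square inch to pascals."""
    return psi * PSI_TO_PA

print(psi_to_pa(14.7))       # atmospheric pressure: ~1.01e5 Pa
print(psi_to_pa(1.0) / 1e3)  # 1 psi in kPa: 6.895
```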
Exercise: From the critical resolved shear stress for single crystal aluminum shown in Table 1, calculate the minimum force (in pounds) that must be applied to a one half inch diameter rod of single crystal Al to deform plastically.
Solution: The critical resolved shear stress for pure, single crystal Al is 148 psi, so we need to figure out what tensile stress on the sample will produce this value of the resolved shear stress, ${\tau }_{rss}$. The smallest value of $\sigma$ for which ${\tau }_{rss}$ reaches the critical value of 148 psi occurs for the slip system with $\phi =\lambda =45^{\circ}$, so from Eq. 7.1 we get $\sigma =2{\tau }_{rss}=296$ psi. Multiplying by the cross-sectional area of the rod gives:
$F\phantom{\rule{6px}{0ex}}=\left(296\phantom{\rule{6px}{0ex}}psi\right)\cdot \pi \cdot {\left(0.25\phantom{\rule{6px}{0ex}}in\right)}^{2}=58\phantom{\rule{6px}{0ex}}pounds$
This is a pretty small force, and is much less than the force required to deform a stock piece of aluminum that I would find in the machine shop.
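The arithmetic in this exercise can be scripted. A short sketch (the variable names are our own; the 148 psi critical resolved shear stress is the Table 1 value quoted in the solution):

```python
import math

# Reproduce the worked exercise for a half-inch diameter single-crystal Al rod.
tau_crss_psi = 148.0              # critical resolved shear stress, pure Al (Table 1)
phi = lam = math.radians(45.0)    # most favorably oriented slip system

# Schmid's law: tau_rss = sigma * cos(phi) * cos(lambda)
sigma_psi = tau_crss_psi / (math.cos(phi) * math.cos(lam))  # = 2 * tau_crss

radius_in = 0.25                  # half-inch diameter rod
force_lb = sigma_psi * math.pi * radius_in**2

print(round(sigma_psi), round(force_lb))  # 296 psi and ~58 pounds
```

At φ = λ = 45°, cos φ · cos λ = 1/2, which is why the required tensile stress is exactly twice the critical resolved shear stress.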
Why is the force to deform a single crystal so low? We'll start by considering what we would expect for the critical resolved shear stress if the shear deformation were to occur by the sliding of atomic planes over one another, as shown conceptually in Figure 7.3. We refer to the stress required to slide these planes over one another as the dislocation-free critical resolved shear stress, ${\tau }_{crss}^{0}.$
Figure 7.3: Sliding of close packed planes on top of one another.
We'll start by reminding ourselves of the definition of a shear strain, illustrated in Figure 7.4. In shear deformation, two parallel surfaces separated by a distance, $d$, are translated by an amount $u$ with respect to one another. If the deformation occurs in the x-y plane, we refer to the shear strain as ${e}_{xy}$, which is given by:
${e}_{xy}=\frac{u}{d} \qquad \left(7.2\right)$
For a linearly elastic material, the shear stress, $\tau$ is proportional to ${e}_{xy}$, with the shear modulus $G$ defined as the ratio of shear stress over shear strain:
$\tau =G\frac{u}{d} \qquad \left(7.3\right)$
Figure 7.5 shows a schematic representation of the stress as a function of displacement for the atomic planes shown in Figure 7.3. The stress function has the following features:
1. The stress is a periodic function, with the stress repeating every time the displacement is increased by an amount equal to $b$, the distance between atoms along the slip direction.
2. The stress is equal to zero at the stable equilibrium positions at $u=0,\phantom{\rule{6px}{0ex}}b,\phantom{\rule{6px}{0ex}}2b$, etc.
3. For $0<u<b/2$ the stress is positive because we need to apply a stress to move the atoms out of their stable equilibrium positions.
4. At $u=b/2$ the system is at an unstable equilibrium. The stress is also equal to zero at this position, but the equilibrium is unstable because any slight perturbation in the displacement will cause the atomic plane to fall back into an equilibrium position at $u=0$ or $u=b$.
5. The maximum stress is at $u=b/4$ . The stress actually reverses sign for $u>b/2$, since a stress must be applied to avoid having the atoms fall into the equilibrium position at $u=b$.
Figure 7.5: Schematic representation of the stress vs. displacement as the atomic planes in Figure 7.3 slide over one another.
The simplest mathematical expression for the shear stress that has the right periodicity is a sinusoidal function:
$\tau = a\sin\left(\frac{2\pi u}{b}\right)$
Now we need to figure out what the constant $a$ is in terms of actual material properties. For small displacements the material is in the linear regime, and we can use the definition of the shear modulus (Eq.