Lecture 22
Definition 11.4.1. If $X$ is a vector space, then we say a function $\snorm{\pd} \colon X \to \R$ is a norm if
1. $\snorm{x} \geq 0$ for all $x \in X$, with $\snorm{x} = 0$ if and only if $x = 0$,
2. $\snorm{cx} = \sabs{c} \, \snorm{x}$ for all $c \in \R$ and all $x \in X$,
3. $\snorm{x+y} \leq \snorm{x} + \snorm{y}$ for all $x, y \in X$ (triangle inequality).
A vector space equipped with a norm is called a normed vector space.
Definition 11.4.2. Let $x=(x_1,x_2,\ldots,x_n),$ $y=(y_1,y_2,\ldots,y_n) \in \R^n$ be two vectors. The dot product is defined as
$ \ds x \pd y := \sum_{j=1}^n x_j\, y_j . $
Remark 1: The dot product is bilinear. That is, it is linear in each variable separately. In other words, if $y$ is fixed, the map $x\mapsto x\pd y$ is a linear map from $\R^n$ to $\R$. Similarly, if $x$ is fixed, the map $y\mapsto x\pd y$ is linear.
Remark 2: It is also symmetric. That is, $x\pd y = y\pd x.$
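Both remarks are easy to verify numerically. A minimal Python sketch (the vectors below are arbitrary examples, not from the lecture):

```python
def dot(x, y):
    """Dot product of two vectors given as lists of numbers."""
    return sum(a * b for a, b in zip(x, y))

x = [1.0, 2.0, 3.0]
y = [4.0, -1.0, 0.5]
z = [0.0, 2.5, -2.0]

# Symmetry: x . y == y . x
print(dot(x, y) == dot(y, x))  # True

# Linearity in the first variable: (x + 2z) . y == x . y + 2 (z . y)
w = [a + 2 * c for a, c in zip(x, z)]
print(abs(dot(w, y) - (dot(x, y) + 2 * dot(z, y))) < 1e-12)  # True
```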
Definition 11.4.3. For $x=(x_1,x_2,\ldots,x_n) \in \R^n,$ the Euclidean norm is defined as
$\ds \snorm{x} := \snorm{x}_{\R^n} := \sqrt{x \pd x} = \sqrt{(x_1)^2+(x_2)^2 + \cdots + (x_n)^2}.$
It is easy to see that the Euclidean norm satisfies properties 1 and 2 in Definition 11.4.1. The triangle inequality follows from the Cauchy-Schwarz inequality below:
$\snorm{x+y}^2 = x \pd x + y \pd y + 2 (x \pd y) \leq \snorm{x}^2 + \snorm{y}^2 + 2 \snorm{x} \,\snorm{y} = {\bigl(\snorm{x} + \snorm{y}\bigr)}^2 .$
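The triangle inequality can also be checked in code (a Python sketch; the example vectors are chosen so the norms come out to whole numbers):

```python
import math

def norm(x):
    """Euclidean norm: sqrt(x . x)."""
    return math.sqrt(sum(a * a for a in x))

x = [3.0, 4.0]    # norm 5
y = [5.0, 12.0]   # norm 13
s = [a + b for a, b in zip(x, y)]  # x + y

print(norm(x), norm(y))              # 5.0 13.0
print(norm(s) <= norm(x) + norm(y))  # True: triangle inequality
```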
Theorem 11.4.1. (Cauchy-Schwarz inequality) Let $x, y \in \R^n.$ Then \begin{equation*} \sabs{x \pd y} \leq \snorm{x} \, \snorm{y} = \sqrt{x\pd x}\, \sqrt{y\pd y}, \end{equation*} with equality if and only if $x = \lambda y$ or $y = \lambda x$ for some $\lambda \in \R$.
Proof. If $x=0$ or $y = 0,$ the result follows immediately. So assume $x\not= 0$ and $y \not= 0.$
If $x$ is a scalar multiple of $y,$ that is $x = \lambda y$ for some $\lambda \in \R$, then the theorem holds with equality:
$\sabs{ x \pd y } = \sabs{\lambda y \pd y} = \sabs{\lambda} \, \sabs{y\pd y} = \sabs{\lambda} \, \snorm{y}^2 = \snorm{\lambda y} \, \snorm{y} = \snorm{x} \, \snorm{y} .$
Now consider fixed $x$ and $y,$ and the variable $t.$ Then we have
$\snorm{x+ty}^2 = (x+ty) \pd (x+ty)$
$= x \pd x + x \pd ty + ty \pd x + ty \pd ty \quad$ (bilinearity)
$= \snorm{x}^2 + 2t(x \pd y) + t^2 \snorm{y}^2 , \quad$ (symmetry)
which is a polynomial of degree 2 in $t$ (since $y \neq 0$). If $x$ is not a scalar multiple of $y,$ then $x+ty \neq 0$ for every $t,$ so $\snorm{x+ty}^2 > 0$ for all $t$ and the polynomial has no real roots. Elementary algebra then says that its discriminant must be negative. That is,
$4 {(x \pd y)}^2 - 4 \snorm{x}^2\snorm{y}^2 \lt 0, $
or in other words,
${(x \pd y)}^2 \lt \snorm{x}^2\snorm{y}^2.\; \bs$
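Both cases of the theorem, strict inequality for non-parallel vectors and equality for $x = \lambda y,$ can be illustrated numerically (a Python sketch with arbitrary example vectors):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

x = [1.0, 2.0, 2.0]   # norm 3
y = [3.0, 0.0, 4.0]   # norm 5

# Strict inequality: x is not a scalar multiple of y.
print(abs(dot(x, y)) < norm(x) * norm(y))  # True: 11 < 15

# Equality when x is a scalar multiple of y.
lam = -2.5
x2 = [lam * b for b in y]
print(math.isclose(abs(dot(x2, y)), norm(x2) * norm(y)))  # True
```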
Standard distance in $\R^n$
The distance $$d(x,y) \coloneqq \snorm{x-y}$$ is the standard distance (standard metric) on $\R^n$ that we used when we talked about metric spaces.
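In code, the standard metric is just the norm of a difference (a Python sketch; the points are arbitrary examples):

```python
import math

def norm(x):
    return math.sqrt(sum(a * a for a in x))

def d(x, y):
    """Standard metric on R^n: d(x, y) = ||x - y||."""
    return norm([a - b for a, b in zip(x, y)])

p, q, r = [0.0, 0.0], [3.0, 4.0], [6.0, 8.0]
print(d(p, q))                       # 5.0
print(d(p, q) == d(q, p))            # True: symmetry
print(d(p, r) <= d(p, q) + d(q, r))  # True: triangle inequality
```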
Operator norm
Definition 11.4.4. Let $A \in L(X,Y).$ Define \begin{equation*} \snorm{A} := \sup \bigl\{ \snorm{Ax} : x \in X \text{ with } \snorm{x} = 1 \bigr\} . \end{equation*} The number $\snorm{A}$ (possibly $\infty$) is called the operator norm.
In particular, the operator norm is a norm when the spaces are finite-dimensional. When it is necessary to emphasize which norm we are talking about, we may write it as
$\ds\snorm{A}_{L(X,Y)}.$
For example, if $X=\R^1$ with norm $\snorm{x}=\sabs{x},$ we think of elements of $L(X)$ as multiplication by scalars: $$x \mapsto ax.$$ If $\snorm{x} =\sabs{x}=1,$ then $\sabs{a x} = \sabs{a},$ so the operator norm of $a$ is $\sabs{a}.$
By linearity,
$\ds \bigg|\bigg|A \frac{x}{\snorm{x}}\bigg|\bigg| = \frac{\snorm{Ax}}{\snorm{x}}$
for all nonzero $x \in X.$ The vector $\ds\frac{x}{\snorm{x}}$ is of norm 1. Therefore,
$\snorm{A} = \sup \bigl\{ \snorm{Ax} : x \in X \text{ with } \snorm{x} = 1 \bigr\} = \ds\sup_{\substack{x \in X\\x\neq 0}} \frac{\snorm{Ax}}{\snorm{x}} .$
👉 $\snorm{A} = \ds\sup_{\substack{x \in X\\x\neq 0}} \frac{\snorm{Ax}}{\snorm{x}} $
Assuming $\snorm{A}$ is not infinity, this implies that for every $x \in X,$
$\ds\snorm{Ax} \leq \snorm{A} \snorm{x} .$
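For a concrete matrix, the supremum can be approximated by sampling unit vectors and the inequality $\snorm{Ax} \leq \snorm{A}\,\snorm{x}$ checked directly (a Python sketch; the matrix and the sampling resolution are arbitrary choices, so `op_norm` is only an approximation of $\snorm{A}$):

```python
import math

A = [[2.0, 1.0],
     [0.0, 3.0]]

def apply(A, x):
    """Matrix-vector product for a 2x2 matrix."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def norm(x):
    return math.sqrt(sum(a * a for a in x))

# Approximate ||A|| = sup { ||Ax|| : ||x|| = 1 } by sampling the unit circle.
op_norm = max(
    norm(apply(A, [math.cos(2 * math.pi * k / 10000),
                   math.sin(2 * math.pi * k / 10000)]))
    for k in range(10000)
)

# ||Ax|| <= ||A|| ||x|| for an arbitrary test vector.
x = [1.0, -2.0]
print(norm(apply(A, x)) <= op_norm * norm(x) + 1e-9)  # True
```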
It also follows from the definition that $\snorm{A} = 0$ if and only if $A = 0,$ where by $A=0$ we mean that $A$ takes every vector to the zero vector.
What would be the operator norm
of the identity operator? 🤔
$\snorm{I} $ $ = \ds\sup_{\substack{x \in X\\x\neq 0}} \frac{\snorm{Ix}}{\snorm{x}} $ $ = \ds \sup_{\substack{x \in X\\x\neq 0}} \frac{\snorm{x}}{\snorm{x}} $ $ = 1. $ 😃
Theorem 11.4.2. Let $X$ and $Y$ be normed vector spaces. Suppose that $X$ is finite-dimensional. If $A \in L(X,Y),$ then $\snorm{A} \lt \infty,$ and $A$ is uniformly continuous.
Remark: To prove that $A$ is uniformly continuous, we can prove that it is Lipschitz continuous. That is, we need to prove that there exists a $K>0$ such that
$\snorm{Av - Aw }\leq K \, \snorm{v-w} \;$ for all $\;v,w\in X;$ we will see that $K = \snorm{A}$ works.
Proof. Assume $X = \R^n.$ 😃 Let $\{ e_1,e_2,\ldots,e_n \}$ be the standard basis of $X$.
For $x\in X$, with $\snorm{x} = 1$, write $\ds x = \sum_{k=1}^n c_k \, e_k .$
Since $e_k \pd e_\ell = 0$ whenever $k\not=\ell$ and $e_k \pd e_k = 1$, we have $c_k = x \pd e_k$, and by Cauchy-Schwarz,
$\sabs{c_k}= $ $ \sabs{ x \pd e_k } $ $\leq \snorm{x} \, \snorm{e_k} $ $ = 1 . $
Then
$\,\snorm{Ax} = \ds \left|\left|\sum_{k=1}^n c_k \, Ae_k\right|\right| \leq \sum_{k=1}^n \sabs{c_k} \, \snorm{Ae_k} \leq \sum_{k=1}^n \snorm{Ae_k} , $
since $\sabs{c_k} \leq 1$ for every $k.$
The right-hand side does not depend on $x.$
Thus we have found a finite upper bound for $\snorm{Ax}$ independent of $x,$ which means that $\snorm{A} \lt \infty$.
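The bound from the proof can be observed numerically (a Python sketch with an arbitrary $2\times 2$ matrix): $\snorm{Ax}$ stays below $\sum_k \snorm{Ae_k}$ for every unit vector $x$.

```python
import math

A = [[1.0, -2.0],
     [3.0,  0.5]]

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def norm(x):
    return math.sqrt(sum(a * a for a in x))

# The bound from the proof: sum of ||A e_k|| over the standard basis.
bound = norm(apply(A, [1.0, 0.0])) + norm(apply(A, [0.0, 1.0]))

# ||Ax|| <= bound for every unit vector x (checked on a sample of the circle).
ok = all(
    norm(apply(A, [math.cos(2 * math.pi * k / 1000),
                   math.sin(2 * math.pi * k / 1000)])) <= bound
    for k in range(1000)
)
print(ok)  # True
```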
So we know that $\snorm{A} \lt \infty.$ Using this fact, we can prove that $A$ is uniformly continuous.
Indeed, for any normed vector spaces $X$ and $Y,$ any $A \in L(X,Y),$ and any $v,w \in X,$
$\snorm{Av - Aw}$ $= \ds \snorm{A(v-w)} $ $ \ds \leq \snorm{A} \, \snorm{v-w} . $
Since $\snorm{A} \lt \infty$, $A$ is Lipschitz continuous with constant $K = \snorm{A}$.
Therefore, $A$ is uniformly continuous. $\;\bs $
Theorem 11.4.3. Let $X,Y,$ and $Z$ be finite-dimensional normed vector spaces. If $A, B \in L(X,Y)$ and $c \in \R,$ then $\snorm{A+B} \leq \snorm{A} + \snorm{B}$ and $\snorm{cA} = \sabs{c}\,\snorm{A};$ in particular, the operator norm is a norm on $L(X,Y).$ Moreover, if $A \in L(X,Y)$ and $B \in L(Y,Z),$ then $\snorm{BA} \leq \snorm{B}\,\snorm{A}.$
Proof. 📝 👀 Complementary reading 📖
Consider the vector space $M_{m\times n}$ consisting of all $m\times n$ matrices, and let $X$ and $Y$ be vector spaces with $\dim X = n$ and $\dim Y = m.$
If $\{x_1,\ldots,x_n\}$ is a basis of $X$ and $\{y_1,\ldots,y_m\}$ is a basis of $Y,$ then for each $A\in L(X,Y)$, we have a matrix $\mathcal M(A)\in M_{m\times n}.$ In other words, once bases have been fixed for $X$ and $Y,$ $\mathcal M$ becomes a linear mapping from $L(X,Y)$ to $M_{m\times n}.$
Moreover, $\mathcal M$ is a bijection between $L(X,Y)$ and $M_{m\times n}.$
Let $A,B\in M_{n\times n}.$ Some important facts about determinants:
1. $\det(I) = 1.$
2. $\det(AB) = \det(A)\,\det(B).$
3. $A$ is invertible if and only if $\det(A) \neq 0,$ and in that case $\det(A^{-1}) = \frac{1}{\det(A)}.$
Remark: The determinant is a number assigned to square matrices that measures how the corresponding linear mapping stretches the space. In particular, this number can be used to test for invertibility of a matrix.
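For $2\times 2$ matrices these properties are easy to check by hand or in code (a Python sketch; the matrices are arbitrary examples):

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]   # det = 1, invertible
B = [[3.0, 0.0], [4.0, 2.0]]   # det = 6, invertible
S = [[1.0, 2.0], [2.0, 4.0]]   # det = 0, not invertible

print(det2(matmul2(A, B)) == det2(A) * det2(B))  # True: det is multiplicative
print(det2(S) == 0)  # True: S collapses the plane onto a line
```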
Source: SCiMS
Source: Matrix transformations
$ \ds f(x,y)=\begin{bmatrix} a & b\\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} e \\ f \end{bmatrix} $

$ \ds f_1(x,y)=\begin{bmatrix} 0.00 & 0.00 \\ 0.00 & 0.16 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} $

$ \ds f_2(x,y)=\begin{bmatrix} 0.85 & 0.04 \\ -0.04 & 0.85 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 0.00 \\ 1.60 \end{bmatrix} $

$ \ds f_3(x,y)=\begin{bmatrix} 0.20 & -0.26 \\ 0.23 & 0.22 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 0.00 \\ 1.60 \end{bmatrix} $

$ \ds f_4(x,y)=\begin{bmatrix} -0.15 & 0.28 \\ 0.26 & 0.24 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 0.00 \\ 0.44 \end{bmatrix} $
Click on Play. Explore: Modify the code and click on Play again to see the changes. Have fun! 🌿 🤓
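The four affine maps $f_1,\dots,f_4$ above are the classical Barnsley fern iterated function system. A minimal offline Python sketch of the "chaos game" that the interactive code plays (the probabilities $0.01,$ $0.85,$ $0.07,$ $0.07$ are the standard choices for this fern, an assumption about the applet's settings; here we only generate the points instead of plotting them):

```python
import random

# Coefficients (a, b, c, d, e, f, prob) for f(x, y) = (a x + b y + e, c x + d y + f).
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.00, 0.00, 0.01),  # f1: the stem
    ( 0.85,  0.04, -0.04, 0.85, 0.00, 1.60, 0.85),  # f2: successively smaller leaflets
    ( 0.20, -0.26,  0.23, 0.22, 0.00, 1.60, 0.07),  # f3 and f4:
    (-0.15,  0.28,  0.26, 0.24, 0.00, 0.44, 0.07),  #   the two largest leaflets
]

def step(x, y):
    """Apply one of the four maps, chosen with the given probabilities."""
    r, acc = random.random(), 0.0
    for a, b, c, d, e, f, p in MAPS:
        acc += p
        if r <= acc:
            break
    return a * x + b * y + e, c * x + d * y + f

x, y = 0.0, 0.0   # (0, 0) is the fixed point of f1, so it lies on the fern
points = []
for _ in range(10_000):
    x, y = step(x, y)
    points.append((x, y))

# The attractor fits roughly in [-3, 3] x [0, 10.5].
print(all(-3 <= px <= 3 and 0 <= py <= 10.5 for px, py in points))  # True
```

Plotting `points` with any 2-D scatter tool reproduces the fern.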