B. Multivariate Bernstein Polynomials
Here we give the proofs of the theorems in Section 7.4.
Theorem B.1
Let $f: I:=[0,1]\times[0,1]\to\mathbb{R}$ be a continuous function. Then the two-dimensional Bernstein polynomials converge pointwise to $f$ for $n_1,n_2\to\infty$.

Proof.
Let $(x,y)\in I$ be a fixed point. Because of Theorem 7.2 we have

$$\bigl|B_{f,n_1,n_2}(x,y)-f(x,y)\bigr| \le \bigl|B_{f,n_1,n_2}(x,y)-B_{f(.,y),n_1}(x)\bigr| + \bigl|B_{f(.,y),n_1}(x)-f(x,y)\bigr| < 2\epsilon$$

for all $n_1\ge N_1(\epsilon)$ and $n_2\ge N_2(\epsilon)$. The second summand is smaller than $\epsilon$ for $n_1\ge N_1(\epsilon)$ because $B_{f(.,y),n_1}(x)$ is the Bernstein polynomial for $f(.,y)$, and the first summand is smaller than $\epsilon$ for $n_2\ge N_2(\epsilon)$ because $B_{f,n_1,n_2}(x,y)$ is the (one-dimensional) Bernstein polynomial for $B_{f(.,y),n_1}(x)$. Q.E.D.
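This pointwise convergence is easy to observe numerically. The sketch below implements the two-dimensional Bernstein polynomial directly from its defining double sum (the helper name `bernstein_2d` and the test function are our own choices, not from the text) and evaluates the error at a fixed point for growing degrees:

```python
from math import comb

def bernstein_2d(f, n1, n2, x, y):
    """Two-dimensional Bernstein polynomial B_{f,n1,n2}(x, y)."""
    total = 0.0
    for k1 in range(n1 + 1):
        w1 = comb(n1, k1) * x**k1 * (1 - x)**(n1 - k1)
        for k2 in range(n2 + 1):
            w2 = comb(n2, k2) * y**k2 * (1 - y)**(n2 - k2)
            total += f(k1 / n1, k2 / n2) * w1 * w2
    return total

# Pointwise convergence at a fixed point (x, y), as in Theorem B.1.
f = lambda u, v: u * v + u**2
x, y = 0.3, 0.7
errors = [abs(bernstein_2d(f, n, n, x, y) - f(x, y)) for n in (5, 20, 80)]
```

For this $f$ the error is exactly $x(1-x)/n$, so doubling both degrees halves the error.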
Definition B.2 (Multivariate Bernstein Polynomials)
Let $n_1,\ldots,n_m\in\mathbb{N}$ and $f$ be a function of $m$ variables. The polynomials

$$B_{f,n_1,\ldots,n_m}(x_1,\ldots,x_m) := \sum_{k_1=0}^{n_1}\cdots\sum_{k_m=0}^{n_m} f\!\left(\frac{k_1}{n_1},\ldots,\frac{k_m}{n_m}\right) \prod_{j=1}^{m} \binom{n_j}{k_j}\, x_j^{k_j} (1-x_j)^{n_j-k_j}$$

are called the multivariate Bernstein polynomials of $f$.
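The definition translates directly into code. The following sketch (the function name `bernstein` is ours) evaluates the multivariate Bernstein polynomial by summing over all index tuples $(k_1,\ldots,k_m)$; since Bernstein polynomials reproduce affine functions exactly, that property serves as a quick sanity check:

```python
from math import comb
from itertools import product

def bernstein(f, ns, xs):
    """Multivariate Bernstein polynomial B_{f,n_1,...,n_m} at the point xs."""
    total = 0.0
    # Sum over all index tuples (k_1, ..., k_m) with 0 <= k_j <= n_j.
    for ks in product(*(range(n + 1) for n in ns)):
        weight = 1.0
        for k, n, x in zip(ks, ns, xs):
            weight *= comb(n, k) * x**k * (1 - x)**(n - k)
        total += f(*(k / n for k, n in zip(ks, ns))) * weight
    return total
```

For an affine function such as $f(x_1,x_2,x_3)=1+2x_1-x_2+3x_3$ the polynomial agrees with $f$ up to rounding error, for any choice of degrees.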
Theorem B.3 (Pointwise Convergence)
Let $f: [0,1]^m\to\mathbb{R}$ be a continuous function. Then the multivariate Bernstein polynomials $B_{f,n_1,\ldots,n_m}$ converge pointwise to $f$ for $n_1,\ldots,n_m\to\infty$.

Proof.
By applying Theorem 7.2 to each summand in the decomposition of $B_{f,n_1,\ldots,n_m}(x)-f(x)$ into $m$ differences of one-dimensional Bernstein approximations (as in the proof of Theorem B.1) we see that given an $\epsilon>0$ there are $N_1(\epsilon),\ldots,N_m(\epsilon)$ such that

$$\bigl|B_{f,n_1,\ldots,n_m}(x)-f(x)\bigr| < m\epsilon$$

for all $n_i\ge\max(N_1(\epsilon),\ldots,N_m(\epsilon))$. Q.E.D.
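Pointwise convergence requires only continuity, not smoothness. As an illustration one can take $f(x_1,x_2)=|x_1-x_2|$ (our choice of a continuous but non-differentiable test function) and watch the error at a fixed point shrink as $n_1=n_2=n$ grows:

```python
from math import comb

def bern2(f, n1, n2, x1, x2):
    # Two-variable Bernstein polynomial, the m = 2 case of the definition.
    s = 0.0
    for k1 in range(n1 + 1):
        w1 = comb(n1, k1) * x1**k1 * (1 - x1)**(n1 - k1)
        for k2 in range(n2 + 1):
            w2 = comb(n2, k2) * x2**k2 * (1 - x2)**(n2 - k2)
            s += f(k1 / n1, k2 / n2) * w1 * w2
    return s

f = lambda u, v: abs(u - v)   # continuous but not differentiable on the diagonal
errs = [abs(bern2(f, n, n, 0.4, 0.6) - 0.2) for n in (4, 16, 64)]
```

The convergence is visibly slower than for smooth functions, roughly of order $1/\sqrt{n}$ near the kink.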
Theorem B.5 (Uniform Convergence)
Let $f: [0,1]^m\to\mathbb{R}$ be a continuous function. Then the multivariate Bernstein polynomials $B_{f,n_1,\ldots,n_m}$ converge uniformly to $f$ for $n_1,\ldots,n_m\to\infty$.

Proof.
We first note that because of the uniform continuity of $f$ on $I:=[0,1]^m$ we have

$$\forall \epsilon>0\ \exists \delta>0\ \forall x,y\in I:\quad \Vert x-y\Vert_2<\delta \implies \vert f(x)-f(y)\vert < \frac{\epsilon}{2}.$$

Given an $\epsilon>0$, we can find such a $\delta$. In order to simplify notation we set $x:=(x_1,\ldots,x_m)$ and $k:=\left(\frac{k_1}{n_1},\ldots,\frac{k_m}{n_m}\right)$; $x$ always lies in $I$. We have to estimate

$$\bigl|B_{f,n_1,\ldots,n_m}(x)-f(x)\bigr| \le \sum_{k_1=0}^{n_1}\cdots\sum_{k_m=0}^{n_m} \bigl|f(k)-f(x)\bigr| \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j},$$

and to that end we split the sum into two parts, namely

$$S_1 := \sum\nolimits' \bigl|f(k)-f(x)\bigr| \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j},$$

where $\sum\nolimits'$ means summation over all $k_j$ with $0\le k_j\le n_j$ (where $j\in\{1,\ldots,m\}$) and $\Vert k - x \Vert_2 \ge \delta$, and

$$S_2 := \sum\nolimits'' \bigl|f(k)-f(x)\bigr| \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j},$$

where $\sum\nolimits''$ means summation over the remaining terms. For $S_2$ we have

$$S_2 < \frac{\epsilon}{2} \sum\nolimits'' \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j} \le \frac{\epsilon}{2}.$$

We will now estimate $S_1$. In the sum $S_1$ the inequality $\Vert k - x \Vert_2 \ge \delta$ holds, i.e.,

$$\sum_{j=1}^{m}\left(\frac{k_j}{n_j}-x_j\right)^2 \ge \delta^2.$$

Hence at least one of the summands on the left-hand side is greater than or equal to $\delta^2/m$. Without loss of generality we can assume this is the case for the first summand:

$$\left(\frac{k_1}{n_1}-x_1\right)^2 \ge \frac{\delta^2}{m}.$$

Thus, using Lemma B.4,

$$\sum\nolimits' \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j} \le \frac{m}{\delta^2}\sum_{k_1=0}^{n_1}\left(\frac{k_1}{n_1}-x_1\right)^2\binom{n_1}{k_1} x_1^{k_1}(1-x_1)^{n_1-k_1} \le \frac{m}{4\delta^2 n_1}.$$

We can now estimate $S_1$. Since $f$ is continuous on the compact set $I$, $M:=\max_{x\in I} \vert f(x)\vert$ exists, and hence

$$S_1 \le 2M \sum\nolimits' \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j} \le \frac{Mm}{2\delta^2 n_1}.$$

For $n_1$ large enough we have $Mm / (2\delta^2 n_1) < \epsilon/2$ and thus

$$\bigl|B_{f,n_1,\ldots,n_m}(x)-f(x)\bigr| \le S_1 + S_2 < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon,$$

which completes the proof. Q.E.D.
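Uniform convergence can be illustrated by tracking the maximum error over a grid of sample points rather than the error at a single point. The grid and test function below are our own choices for this sketch:

```python
from math import comb

def bern2(f, n, x, y):
    # B_{f,n,n}(x, y): equal degrees in both variables.
    s = 0.0
    for k1 in range(n + 1):
        w1 = comb(n, k1) * x**k1 * (1 - x)**(n - k1)
        for k2 in range(n + 1):
            w2 = comb(n, k2) * y**k2 * (1 - y)**(n - k2)
            s += f(k1 / n, k2 / n) * w1 * w2
    return s

f = lambda x, y: x**2 * (1 - y)**2    # smooth test function on [0,1]^2

grid = [i / 10 for i in range(11)]
def sup_error(n):
    # Maximum error over the sample grid, a proxy for the sup norm on I.
    return max(abs(bern2(f, n, x, y) - f(x, y)) for x in grid for y in grid)

sup_errs = [sup_error(n) for n in (5, 10, 40)]
```

For this separable $f$ the maximum grid error decays like $1/(4n)$, so the observed values shrink proportionally as $n$ doubles and quadruples.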
A reformulation of this fact is the following corollary.

Corollary B.6
The set of all polynomials is dense in $C([0,1]^m)$.
Theorem B.7 (Error Bound for Lipschitz Condition)
If $f: I:=[0,1]^m\to\mathbb{R}$ is a continuous function satisfying the Lipschitz condition

$$\vert f(x)-f(y)\vert \le L\,\Vert x-y\Vert_2 \quad\text{for all } x,y\in I$$

on $I$, then the inequality

$$\bigl|B_{f,n_1,\ldots,n_m}(x)-f(x)\bigr| \le \frac{L}{2}\sqrt{\sum_{j=1}^{m}\frac{1}{n_j}}$$

holds.

Proof.
Abbreviating notation we set $k:=\left(\frac{k_1}{n_1},\ldots,\frac{k_m}{n_m}\right)$. We will use the Lipschitz condition, Corollary A.7, and Lemma B.4:

$$
\begin{aligned}
\bigl|B_{f,n_1,\ldots,n_m}(x)-f(x)\bigr|
&\le \sum_{k_1=0}^{n_1}\cdots\sum_{k_m=0}^{n_m} \vert f(k)-f(x)\vert \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j}\\
&\le L \sum_{k_1=0}^{n_1}\cdots\sum_{k_m=0}^{n_m} \Vert k-x\Vert_2 \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j}\\
&\le L \left( \sum_{k_1=0}^{n_1}\cdots\sum_{k_m=0}^{n_m} \Vert k-x\Vert_2^2 \prod_{j=1}^{m}\binom{n_j}{k_j} x_j^{k_j}(1-x_j)^{n_j-k_j} \right)^{1/2}\\
&= L \left( \sum_{j=1}^{m} \frac{x_j(1-x_j)}{n_j} \right)^{1/2} \le \frac{L}{2}\sqrt{\sum_{j=1}^{m}\frac{1}{n_j}}.
\end{aligned}
$$

This completes the proof. Q.E.D.
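An error bound of the Lipschitz type can be tested numerically. For $f(x_1,x_2)=|x_1-x_2|$, which satisfies the Lipschitz condition with $L=\sqrt{2}$ with respect to the Euclidean norm, the observed error stays below the standard bound $\frac{L}{2}\sqrt{1/n_1+1/n_2}$ at every sample point (the sample points and degrees below are arbitrary choices for this sketch):

```python
from math import comb, sqrt

def bern2(f, n1, n2, x, y):
    # Two-variable Bernstein polynomial of f at (x, y).
    s = 0.0
    for k1 in range(n1 + 1):
        w1 = comb(n1, k1) * x**k1 * (1 - x)**(n1 - k1)
        for k2 in range(n2 + 1):
            w2 = comb(n2, k2) * y**k2 * (1 - y)**(n2 - k2)
            s += f(k1 / n1, k2 / n2) * w1 * w2
    return s

f = lambda u, v: abs(u - v)   # Lipschitz with L = sqrt(2) w.r.t. the 2-norm
L = sqrt(2)
points = [(0.1, 0.9), (0.4, 0.6), (0.5, 0.5)]
ok = all(
    abs(bern2(f, n1, n2, x, y) - f(x, y)) <= (L / 2) * sqrt(1 / n1 + 1 / n2)
    for (n1, n2) in [(5, 9), (16, 16), (30, 10)]
    for (x, y) in points
)
```

The bound is far from tight at points where $f$ is smooth, and is approached most closely near the kink at $x_1=x_2$.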
Theorem B.8 (Asymptotic Formula)
Let $f: I:=[0,1]^m\to\mathbb{R}$ be a $C^2$ function and $x\in I$, then

$$\lim_{n\to\infty} n\,\bigl(B_{f,n,\ldots,n}(x)-f(x)\bigr) = \frac{1}{2}\sum_{j=1}^{m} x_j(1-x_j)\,\frac{\partial^2 f}{\partial x_j^2}(x).$$

Proof.
We define the vector $h$ through $h_j:=k_j/n-x_j$, where the $k_j$ are the integers over which we sum in $B_{f,n,\ldots,n}$. Using Theorem A.14 we see

$$f\!\left(\frac{k_1}{n},\ldots,\frac{k_m}{n}\right) = f(x) + \sum_{j=1}^{m} h_j \frac{\partial f}{\partial x_j}(x) + \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m} h_i h_j \frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \Vert h\Vert^2 \rho(h),$$

where $\lim_{h\to0} \rho(h)=0$. Summing this equation like the sum in $B_{f,n,\ldots,n}$ we obtain

$$B_{f,n,\ldots,n}(x) = f(x) + \frac{1}{2n}\sum_{j=1}^{m} x_j(1-x_j)\frac{\partial^2 f}{\partial x_j^2}(x) + \sum_{k_1=0}^{n}\cdots\sum_{k_m=0}^{n} \Vert h\Vert^2 \rho(h) \prod_{j=1}^{m}\binom{n}{k_j} x_j^{k_j}(1-x_j)^{n-k_j},$$

since many terms vanish or can be summed because of Lemma B.4. Noting $\lim_{h\to0} \rho(h)=0$ we can apply the same technique as in the proof of Theorem B.5 for estimating the last sum in the last equation, i.e., splitting the sum into two parts for $\Vert h\Vert\ge\delta$ and $\Vert h\Vert<\delta$. Hence we see that for every $\epsilon>0$ this sum is less than or equal to $\epsilon/n$ for all sufficiently large $n$, which yields the claim. Q.E.D.
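The asymptotic formula states that $n\,(B_{f,n,\ldots,n}(x)-f(x))$ converges to $\frac{1}{2}\sum_j x_j(1-x_j)\,\partial^2 f/\partial x_j^2(x)$. For $f(x_1,x_2)=x_1^2x_2^2$ this limit can be computed by hand and checked numerically (the test point and degrees are our choices for this sketch):

```python
from math import comb

def bern2(f, n, x, y):
    # B_{f,n,n}(x, y) with equal degrees, as in the asymptotic formula.
    s = 0.0
    for k1 in range(n + 1):
        w1 = comb(n, k1) * x**k1 * (1 - x)**(n - k1)
        for k2 in range(n + 1):
            w2 = comb(n, k2) * y**k2 * (1 - y)**(n - k2)
            s += f(k1 / n, k2 / n) * w1 * w2
    return s

f = lambda x, y: x**2 * y**2
x, y = 0.3, 0.5
# Second partials of f: f_xx = 2 y**2 and f_yy = 2 x**2, so the claimed limit is
limit = 0.5 * (x * (1 - x) * 2 * y**2 + y * (1 - y) * 2 * x**2)
scaled = [n * (bern2(f, n, x, y) - f(x, y)) for n in (10, 40, 160)]
```

For this $f$ the scaled error equals the limit plus an explicit $O(1/n)$ remainder, so the values approach `limit` monotonically.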
Clemens Heitzinger
2003-05-08