The following property of Gaussian probability density functions (pdfs) is used often in this paper; here we state it in the form of a theorem:
Theorem 1
Let $x \in \mathbb{R}^m$ and let $p(x)$ be a zero-mean Gaussian pdf with covariance matrix $\Sigma = \{\Sigma_{ij}\}$ ($i, j = 1, \ldots, m$). If $g : \mathbb{R}^m \to \mathbb{R}$ is a differentiable function not growing faster than a polynomial, with partial derivatives $\partial_i g(x) = \partial g(x) / \partial x_i$, then

$$\int dx\, p(x)\, x_i\, g(x) = \sum_{j=1}^{m} \Sigma_{ij} \int dx\, p(x)\, \partial_j g(x) . \qquad (185)$$
In the following we will assume definite integration over $\mathbb{R}^m$ whenever an integral appears. Alternatively, using vector notation, the above identity reads:

$$\int dx\, p(x)\, x\, g(x) = \Sigma \int dx\, p(x)\, \nabla g(x) \qquad (186)$$
For a general Gaussian pdf with mean $\mu$ the above equation transforms to:

$$\int dx\, p(x)\, x\, g(x) = \mu \int dx\, p(x)\, g(x) + \Sigma \int dx\, p(x)\, \nabla g(x) \qquad (187)$$
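Since identity (187) is used repeatedly, a quick numerical sanity check can be reassuring. The following sketch (assuming NumPy; the mean, covariance, and test function $g$ are arbitrary illustrative choices, not taken from the text) estimates both sides of (187) by Monte Carlo:

```python
import numpy as np

# Monte Carlo check of eq. (187): E[x g(x)] = mu E[g(x)] + Sigma E[grad g(x)].
# mu, Sigma, and g below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

def g(x):
    # differentiable test function with at most polynomial growth
    return np.sin(x[:, 0]) + x[:, 0] * x[:, 1] ** 2

def grad_g(x):
    # gradient of g, computed by hand
    return np.stack([np.cos(x[:, 0]) + x[:, 1] ** 2,
                     2.0 * x[:, 0] * x[:, 1]], axis=1)

x = rng.multivariate_normal(mu, Sigma, size=1_000_000)

lhs = (x * g(x)[:, None]).mean(axis=0)                   # E[x g(x)]
rhs = mu * g(x).mean() + Sigma @ grad_g(x).mean(axis=0)  # mu E[g] + Sigma E[grad g]
print(lhs, rhs)  # the two vectors should agree up to Monte Carlo error
```

Both printed vectors should coincide to a few decimal places at this sample size.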
Proof.
The proof uses the rule of partial integration:

$$\int dx\, p(x)\, \partial_i g(x) = - \int dx\, g(x)\, \partial_i p(x)$$

where we have used the fast decay of the Gaussian function to dismiss the boundary term.
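To make the dismissed term explicit (a brief expansion of this step, using the theorem's growth assumption on $g$):

```latex
% Integrate the product rule \partial_i (p g) = (\partial_i p) g + p (\partial_i g)
% over \mathbb{R}^m. The left-hand side is a pure boundary term and vanishes,
% since p(x) decays like a Gaussian while g(x) grows at most polynomially:
\int dx\, \partial_i \bigl( p(x)\, g(x) \bigr) = 0
\quad \Longrightarrow \quad
\int dx\, p(x)\, \partial_i g(x) = - \int dx\, g(x)\, \partial_i p(x) .
```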
Using the derivative of the zero-mean Gaussian pdf, $\nabla p(x) = - \Sigma^{-1} x\, p(x)$, we have:

$$\int dx\, p(x)\, \nabla g(x) = \int dx\, g(x)\, \Sigma^{-1} x\, p(x)$$
Multiplying both sides by $\Sigma$ leads to eq. (186), completing the proof. For a nonzero mean the deduction is equally straightforward, as sketched below.
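For completeness, here is one way to carry out the nonzero-mean deduction (a sketch, not spelled out in the original): substitute $x = \mu + z$, where $z$ is zero-mean Gaussian with covariance $\Sigma$, and apply eq. (186) to $h(z) = g(\mu + z)$, using $\nabla h(z) = \nabla g(\mu + z)$:

```latex
% Zero-mean identity (186) applied to h(z) = g(\mu + z):
\int dz\, p(z)\, (\mu + z)\, g(\mu + z)
  = \mu \int dz\, p(z)\, g(\mu + z)
  + \Sigma \int dz\, p(z)\, \nabla g(\mu + z)
% Changing variables back to x = \mu + z recovers eq. (187).
```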