\documentclass[graybox,envcountchap,sectrefs]{svmono}
\usepackage{amssymb,amsmath}
\usepackage{amsfonts}
\usepackage{epsfig,wrapfig}
\usepackage{mathptmx}
\usepackage{helvet}
\usepackage{courier}
\usepackage{type1cm}
\usepackage{makeidx}
\usepackage{multicol}
\makeindex
%\font\script=cmcsc10
%\font\kisit=cmti8
\font\BBB=msbm10
\makeatletter
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\labelenumi}{\theenumi)}
\makeatother
\begin{document}
\author{P\'eter Major}
\title{MULTIPLE WIENER--IT\^O INTEGRALS \\
with applications to limit theorems}
\subtitle{-- Lecture Note -- 849}
%\subtitle{-- Monograph --}
\maketitle
\frontmatter
\tableofcontents
\preface
One of the most important problems in probability theory is the
investigation of the limit distribution of partial sums of
appropriately normalized random variables. The case where the random
variables are independent is fairly well understood. Many results are
known also in the case where independence is replaced by an appropriate
mixing condition or some other ``almost independence'' property. Much
less is known about the limit behaviour of partial sums of really
dependent random variables. On the other hand, this case is becoming
more and more important, not only in probability theory, but also in
some applications in statistical physics.
The problem of the asymptotic behaviour of partial sums of dependent
random variables leads to the investigation of some very complicated
transformations of probability measures. The classical methods of
probability theory do not seem to work for this problem. On the other
hand, although we are still very far from a satisfactory solution
of this problem, we can already present some nontrivial results.
The so-called multiple Wiener--It\^o integrals have proved to be a
very useful tool in the investigation of this problem. The proofs of
almost all rigorous results in this field are closely related to this
technique. The notion of multiple Wiener--It\^o integrals was worked
out for the investigation of non-linear functionals over Gaussian fields.
It is closely related to the so-called Wick polynomials which can be
considered as the multi-dimensional generalization of Hermite polynomials.
The notion of Wick polynomials and multiple Wiener--It\^o integrals
were worked out at the same time and independently of each other.
Actually, we discuss a modified version of the multiple Wiener--It\^o
integrals in greatest detail. The technical changes needed in the
definition of these modified integrals are not essential. On the other
hand, these modified integrals are more appropriate for certain
investigations, since they enable us to describe the action of shift
transformations and to apply some sort of random Fourier analysis.
There is also some connection between multiple Wiener--It\^o integrals
and the classical stochastic It\^o integrals. The main difference
between them is that in the first case deterministic functions are
integrated, and in the second case so-called non-anticipating
functionals. The consequence of this difference is that no technical
difficulty arises when we want to define multiple Wiener--It\^o
integrals in the multi-dimensional time case. On the other hand,
a large class of nonlinear functionals over Gaussian fields can be
represented by means of multiple Wiener--It\^o integrals.
In this work we are interested in limit problems for sums of dependent
random variables. It is useful to consider this problem together with
its continuous time version. The natural formulation of the continuous
time version of this problem can be given by means of generalized fields.
Consequently we also have to discuss some questions about generalized
fields.
I have not tried to formulate all the results in the most general form.
My main goal was to work out the most important techniques needed in
the investigation of such problems. This is the reason why the greatest
part of this work deals with multiple Wiener--It\^o integrals. I have
tried to give a self-contained exposition of this subject and also
to explain the motivation behind the results.
I had the opportunity to participate in the Dobrushin--Sinai seminar
in Moscow. What I learned there was very useful also for the
preparation of this Lecture Note. Therefore I would like to thank
the members of this seminar for what I could learn from them, especially
P.~M.~Bleher, R.~L.~Dobrushin and Ya.~G.~Sinai.
\medskip\noindent
{\it Some additional remarks.}
\medskip\noindent
This text is a slightly modified version of my Lecture Note {\it
Multiple Wiener--It\^o integrals with applications to limit theorems}\/
published in the {\it Lecture Notes in Mathematics}\/ series
(number~849) of the Springer Verlag in~1981. I decided to make a
special lecture on the basis of this work in the first semester of
the university course in~2011--2012 at the University of~Szeged.
Preparing for it I observed how difficult the reading of formulas
in this Lecture Note is. These difficulties arose because this
Lecture Note was written at the time when the \TeX{} program still
did not exist, and the highest technical level of typing was
writing on an IBM machine that enabled one to type, besides the
usual text, also mathematical formulas. But the texts written in such a
way are very hard to read. To make my text more readable I decided
to retype it by means of the \TeX{} program. This demanded some
changes. It implied e.g.\ following such partly typographical,
partly linguistic rules by which one does not start a sentence
with a formula. Besides, it suggested formulating the basic
definitions in a (typographically) more explicit form and not as an
explanation inside the text. When typing this work I also tried to
rethink what I had written, to correct the errors and to make the
proofs more understandable. It was surprising and a little bit
shocking to meet my old personality by studying my old Lecture
Note and to recognize how much I have changed. Now I would expose
many details in a different way. Naturally I would also make many
changes by taking into account the results proved since the time I
wrote this note. Nevertheless I decided to make no essential
changes in the text, to restrict myself to the correction of the
errors I found, and to give a more detailed explanation of the
proofs where I felt that it is useful. (There were many such places.)
In doing so I was influenced by a Russian proverb which says:
`Luchshe vrag khoroshego'. I tried to follow the advice of this
proverb. (I do not know of an English counterpart of it, but it has
a French version: `Le mieux est l'ennemi du bien'.)
I made only one exception. I decided to explain those basic notions
and results in the theory of generalized functions which were
applied in the older version of this work in an implicit way. In
particular, I tried to explain how one gets with the help of this
theory those results about the so-called spectral representation
of the covariance function of stationary random fields that I
have formulated under the name {\it Bochner's theorem}\/ and
{\it Bochner--Schwartz theorem.} This extension of the text is
contained in the attachments to Chapters~1 and~3. In the original
version I only referred to a work where these notions and results
can be found. But now I found such an approach not satisfactory,
because these notions and results play an important role in some
arguments of this work. Hence I felt that to make a self-contained
presentation of the subject I have to explain them in more detail.
\bigskip\noindent
Budapest, 15 August 2011
\medskip
\hskip8truecm P\'eter Major
\mainmatter
\chapter{On a limit problem}
We begin with the formulation of a problem which is important
both for probability theory and statistical physics. The multiple
Wiener--It\^o integral proved to be a very useful tool in the
investigation of this problem.
We shall consider a set of random variables $\xi_n$,
$n\in \textrm{\BBB Z}_\nu$, where $\textrm{\BBB Z}_\nu$ denotes
the $\nu$-dimensional integer lattice, and we shall study
their properties. Such a set of random variables will be called
a ($\nu$-dimensional) discrete random field.
\index{discrete random field}
We shall be mainly interested in so-called stationary random
fields. Let us recall their definition.
\medskip\noindent
{\bf Definition of discrete (strictly) stationary random fields.}
\index{stationary (discrete) random fields}
{\it A set of random variables $\xi_n$, $n\in\textrm{\BBB Z}_\nu$,
is called a (strictly) stationary discrete random field if
$$
(\xi_{n_1},\dots,\xi_{n_k})\stackrel{\Delta}{=}
(\xi_{n_1+m},\dots,\xi_{n_k+m})
$$
for all $k=1,2,\dots$ and \ $n_1,\dots,n_k,\;m\in\textrm{\BBB Z}_\nu$,
where $\stackrel{\Delta}{=}$ denotes equality in distribution.}
\medskip
Let us also recall that a discrete random field $\xi_n$,
$n\in\textrm{\BBB Z}_\nu$, is called Gaussian if for every finite
subset $\{n_1,\dots,n_k\}\subset\textrm{\BBB Z}_\nu$ the random
vector $(\xi_{n_1},\dots,\xi_{n_k})$ is normally distributed.
Given a discrete random field $\xi_n$, $n\in\textrm{\BBB Z}_\nu$, we
define for all $N=1,2,\dots$ the new random fields
\begin{equation}
Z_n^N=A_N^{-1}\sum_{j\in B_n^N}\xi_j, \qquad N=1,2,\dots, \quad
n\in\textrm{\BBB Z}_\nu,
\label{(1.1)}
\end{equation}
where
$$
B_n^N=\{j\colon\; j\in \textrm{\BBB Z}_\nu,\quad n^{(i)}N\le
j^{(i)}<(n^{(i)}+1)N,\;i=1,2,\dots, \nu\},
$$
and $A_N$, $A_N>0$, is an appropriate norming constant. The superscript
$(i)$ denotes the $i$-th coordinate of a vector in this formula. We are
interested in the question when the finite dimensional distributions of
the random fields $Z_n^N$ defined in~(\ref{(1.1)}) have a limit as
$N\to\infty$.
In particular, we would like to describe those random fields $Z_n^*$,
$n\in\textrm{\BBB Z}_\nu$, which appear as the limit of such random
fields~$Z_n^N$. This problem led to the introduction of the following
notion.
\medskip\noindent
{\bf Definition of self-similar (discrete) random fields.}
\index{self-similar (discrete) random field}
\index{self-similarity parameter}
{\it A (discrete) random field
$\xi_n$, $n\in\textrm{\BBB Z}_\nu$, is called self-similar
with self-similarity parameter~$\alpha$ if the random
fields $Z^N_n$ defined in~(\ref{(1.1)}) with their help
and the choice $A_N=N^\alpha$ satisfy the relation
\begin{equation}
(\xi_{n_1},\dots,\xi_{n_k})\stackrel{\Delta}{=}(Z^N_{n_1},\dots,Z_{n_k}^N)
\label{(1.2)}
\end{equation}
for all $N=1,2,\dots$ and $n_1,\dots,n_k\in\textrm{\BBB Z}_\nu$.}
\medskip
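To illustrate this definition with the simplest example: if the random
variables $\xi_n$, $n\in\textrm{\BBB Z}_\nu$, are independent with
standard normal distribution, then the choice $A_N=N^{\nu/2}$ yields
$$
Z^N_n=N^{-\nu/2}\sum_{j\in B_n^N}\xi_j,
$$
and since each block $B_n^N$ contains $N^\nu$ points, and different
blocks are disjoint, the random variables $Z^N_n$ are again independent
with standard normal distribution. Hence this field is self-similar
with self-similarity parameter $\alpha=\frac\nu2$.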
We are interested in the choice $A_N=N^\alpha$ with some
$\alpha>0$ in the definition of the random variables~$Z^N_n$
in~(\ref{(1.1)}), because under slight restrictions,
relation~(\ref{(1.2)}) can be satisfied only with
such norming constants $A_N$. A central problem both in statistical
physics and in probability theory is the description of self-similar
fields. We are interested in self-similar fields whose random
variables have a finite second moment. This excludes the fields
consisting of i.i.d.\ random variables with a non-Gaussian stable law.
The Gaussian self-similar fields and their range of attraction
are fairly well known. Much less is known about the non-Gaussian case.
The problem is hard, because the transformations of measures over
$R^{\textrm{\BBB Z}_\nu}$ induced by formula~(\ref{(1.1)}) have a very
complicated structure. We shall define the so-called subordinated
fields below. (More precisely, the fields subordinated to a stationary
Gaussian field.) In the case of subordinated fields the Wiener--It\^o
integral is a very useful tool for investigating the transformation
defined in~(\ref{(1.1)}). In particular, it enables us to construct
non-Gaussian self-similar fields and to prove non-trivial limit
theorems. All known results are closely related to this technique.
Let $X_n$, $n\in\textrm{\BBB Z}_\nu$, be a stationary Gaussian
field. We define the shift transformations
$T_m$, $m\in\textrm{\BBB Z}_\nu$, over this field by the formula
$T_mX_n=X_{n+m}$ for all $n,\,m\in\textrm{\BBB Z}_\nu$. Let
${\cal H}$ denote the {\it real} Hilbert space consisting of
the square integrable random variables measurable with respect
to the $\sigma$-algebra
${\cal B}={\cal B}(X_n,\;n\in\textrm{\BBB Z}_\nu)$. The scalar
product in ${\cal H}$ is defined as $(\xi,\eta)=E\xi\eta$,
$\xi,\,\eta\in{\cal H}$. The shift transformations $T_m$,
$m\in\textrm{\BBB Z}_\nu$, can be extended to a group of
unitary shift transformations over ${\cal H}$ in a natural way.
Namely, if $\xi=f(X_{n_1},\dots,X_{n_k})$ then we define
$T_m\xi=f(X_{n_1+m},\dots,X_{n_k+m})$. It can be seen that
$\|\xi\|=\|T_m\xi\|$, and the above considered random variables
$\xi$ are dense in ${\cal H}$.
\index{shift transformation (discrete random field)}
(A more detailed discussion about the definition of shift
operators and their properties will be given in Chapter~2 in
a {\it Remark}\/ after the formulation of Theorem~2C. There
we shall define the shift $T_m\xi$, $m\in\textrm{\BBB Z}_\nu$,
of all random variables $\xi$ which are measurable with respect
to the $\sigma$-algebra ${\cal B}(X_n,\,n\in\textrm{\BBB Z}_\nu)$, i.e.\
$\xi$ does not have to be square integrable.) Hence $T_m$
can be extended to the whole space ${\cal H}$ by $L_2$
continuity. It can be proved that the norm preserving
transformations $T_m$, $m\in\textrm{\BBB Z}_\nu$, constitute a
unitary group in ${\cal H}$, i.e. $T_{n+m}=T_nT_m$ for all
$n,\,m\in\textrm{\BBB Z}_\nu$, and $T_0=\textrm{Id}$. Now we
introduce the following
\medskip\noindent
{\bf Definition of subordinated random fields.}
\index{random field subordinated to a stationary Gaussian
random field (discrete case)}
{\it Given a stationary Gaussian field
$X_n$, $n\in\textrm{\BBB Z}_\nu$, we define the Hilbert spaces
${\cal H}$ and the shift transformations $T_m$,
$m\in\textrm{\BBB Z}_\nu$, over ${\cal H}$ as before. A
discrete stationary field $\xi_n$ is called a random field
subordinated to $X_n$ if $\xi_n\in{\cal H}$, and
$T_n\xi_m=\xi_{n+m}$ for all $n,\,m\in\textrm{\BBB Z}_\nu$.}
\medskip
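A simple example of a subordinated field may be instructive. Given a
stationary Gaussian field $X_n$, $n\in\textrm{\BBB Z}_\nu$, put
$$
\xi_n=X_n^2-EX_n^2,\qquad n\in\textrm{\BBB Z}_\nu.
$$
Then $\xi_n\in{\cal H}$, since Gaussian random variables have finite
fourth moments, and
$T_m\xi_n=X_{n+m}^2-EX_{n+m}^2=\xi_{n+m}$ for all
$n,\,m\in\textrm{\BBB Z}_\nu$, because $EX_n^2$ does not depend on~$n$
by stationarity. Hence the field $\xi_n$ is subordinated to~$X_n$.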
We remark that $\xi_0$ determines the subordinated fields
$\xi_n$ completely, since $\xi_n=T_n\xi_0$. Later we give a
more adequate description of subordinated fields by means of
Wiener--It\^o integrals. Before working out the details we
formulate the continuous time version of the above notions
and problems. In the continuous time case it is more natural
to consider generalized random fields. To explain the idea
behind such an approach we briefly explain a different but
equivalent description of discrete random fields. We present
them as an appropriate set of random variables indexed by
the elements of a linear space. This shows some similarity
with the generalized random fields to be defined later.
Let $\varphi_n(x)$, $n\in\textrm{\BBB Z}_\nu$,
$n=(n_1,\dots,n_\nu)$, denote the indicator function of the
cube
$[n_1-\frac12,n_1+\frac12)\times\cdots\times[n_\nu-\frac12,n_\nu+\frac12)$,
with center $n=(n_1,\dots,n_\nu)$ and with edges of length~1,
i.e.\ let $\varphi_n(x)=1$, $x=(x_1,\dots,x_\nu)\in R^\nu$, if
$n_j-\frac12\le x_j<n_j+\frac12$ for all $j=1,\dots,\nu$, and let
$\varphi_n(x)=0$ otherwise. Let $\Phi$ denote the real linear space
consisting of the finite linear combinations
$\varphi=\sum c_n\varphi_n$ of these functions, and given a discrete
random field $\xi_n$, $n\in\textrm{\BBB Z}_\nu$, define the random
variables $\xi(\varphi)=\sum c_n\xi_n$ for all
$\varphi=\sum c_n\varphi_n\in\Phi$. Put
$\varphi^{(N,A_N)}=\sum c_n\varphi^{(N,A_N)}_n$ for
$\varphi=\sum c_n\varphi_n\in\Phi$, where
$\varphi^{(N,A_N)}_n=A_N^{-1}\sum_{j\in B_n^N}\varphi_j$ with the set
$B_n^N$ introduced after formula~(\ref{(1.1)}) and a norming constant
$A_N>0$. Observe that
$\xi(\varphi^{(N,A_N)}_n)=Z^N_n$ with the random variable
$Z^N_n$ defined in~(\ref{(1.1)}). All previously
introduced notions related to discrete random fields can be
reformulated with the help of the set of random variables
$\xi(\varphi)$, $\varphi\in\Phi$. Thus for instance the random
field $\xi_n$, $n\in\textrm{\BBB Z}_\nu$ is self-similar with
self-similarity parameter~$\alpha$ if and only if
$\xi(\varphi^{(N,N^\alpha)})\stackrel{\Delta}{=}\xi(\varphi)$
for all $\varphi\in\Phi$ and $N=1,2,\dots$. (To see why this
statement holds observe that the distributions of two random
vectors agree if and only if every linear combination of their
coordinates has the same distribution. This follows from the fact
that the characteristic function of a random vector determines its
distribution.)
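In other words, if $\eta=(\eta_1,\dots,\eta_k)$ and
$\zeta=(\zeta_1,\dots,\zeta_k)$ are two random vectors such that
$\sum t_j\eta_j\stackrel{\Delta}{=}\sum t_j\zeta_j$ for all real
coefficients $t_1,\dots,t_k$, then
$$
E\exp\left\{i\sum_{j=1}^k t_j\eta_j\right\}
=E\exp\left\{i\sum_{j=1}^k t_j\zeta_j\right\}
\quad\textrm{for all } (t_1,\dots,t_k)\in R^k,
$$
i.e.\ the characteristic functions of $\eta$ and $\zeta$ agree, and
therefore $\eta\stackrel{\Delta}{=}\zeta$.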
It will be useful to define the continuous time version of
discrete random fields as generalized random fields. The generalized
random fields will be defined as a set of random variables indexed
by the elements of a linear space of functions. They show some
similarity to the class of random variables $\xi(\varphi)$,
$\varphi\in\Phi$, defined above. The main difference is that instead
of the space~$\Phi$ a different linear space is chosen for the
parameter set of the random field. We shall choose the so-called
Schwartz space for this role.
Let ${\cal S}={\cal S}_\nu$ be the Schwartz space of (real valued)
rapidly decreasing, smooth functions on $R^\nu$. (See
e.g.~\cite{r15} for the definition of ${\cal S}_\nu$.
\index{space of test functions in the Schwartz space of
generalized functions}
I shall present a more detailed discussion
about the definition of the space ${\cal S}$ in the attachment to
Chapter~1.) Generally one takes the space of complex valued, rapidly
decreasing, smooth functions as the space~${\cal S}$, but we shall
denote the space of {\it real valued},\/ rapidly decreasing, smooth
functions by~${\cal S}$ unless we state otherwise. We shall omit
the subscript $\nu$ if it leads to no ambiguity. Now we introduce the
notion of generalized random fields.
\medskip\noindent
{\bf Definition of generalized random fields.}
\index{generalized random field}
{\it We say that the set of random variables $X(\varphi)$,
$\varphi\in{\cal S}$, is a generalized random field over the
Schwartz space ${\cal S}$ of rapidly decreasing, smooth functions if:
\medskip
\begin{description}
\item[(a)] $X(a_1\varphi_1+a_2\varphi_2)=a_1X(\varphi_1)+a_2X(\varphi_2)$
with probability 1 for all real numbers $a_1$ and $a_2$ and
$\varphi_1\in{\cal S}$, $\varphi_2\in{\cal S}$. (The exceptional set of
probability~0 where this identity does not hold may depend on $a_1$,
$a_2$, $\varphi_1$ and $\varphi_2$.)
\item[(b)] $X(\varphi_n)\Rightarrow X(\varphi)$ stochastically if
$\varphi_n\to\varphi$ in the topology of ${\cal S}$.
\end{description}
}
\medskip
We also introduce the following definitions.
\medskip\noindent
{\bf Definition of stationarity and Gaussian property of a
generalized random field. On the notion of convergence of
generalized random fields in distribution.}
\index{Gaussian random field (generalized field)}
\index{stationary random field (generalized field)}
\index{convergence of generalized random fields in distribution}
{\it The generalized random field
$X=\{X(\varphi),\,\varphi\in {\cal S}\}$ is stationary if
$X(\varphi)\stackrel{\Delta}{=}X(T_t\varphi)$ for all
$\varphi\in{\cal S}$ and $t\in R^\nu$, where
$T_t\varphi(x)=\varphi(x-t)$. It is Gaussian
if $X(\varphi)$ is a Gaussian random variable for all
$\varphi\in{\cal S}$. The relation
$X_n\stackrel{{\cal D}}{\rightarrow} X_0$ as $n\to\infty$
holds for a sequence of generalized random fields $X_n$,
$n=0,1,2,\dots$, if
$X_n(\varphi)\stackrel{{\cal D}}{\rightarrow} X_0(\varphi)$
for all $\varphi\in{\cal S}$, where
$\stackrel{{\cal D}}{\rightarrow}$ denotes convergence in
distribution.}
\medskip
Given a stationary generalized random field $X$ and a function
$A(t)>0$, $t>0$, on the set of positive real numbers we define the
(stationary) random fields $X^A_t$ for all $t>0$ by the formula
\begin{equation}
X^A_t(\varphi)=X(\varphi_t^A), \quad \varphi\in{\cal S}, \qquad
\textrm{where } \varphi_t^A(x)=A(t)^{-1}\varphi\left(\frac xt\right).
\label{(1.3)}
\end{equation}
We are interested in the following
\medskip\noindent
{\bf Question.} {\it When does a generalized random field $X^*$
exist such that $X_t^A\stackrel{{\cal D}}{\rightarrow} X^*$ as
$t\to\infty$ (or as $t\to0$)?}
\medskip
In relation to this question we introduce the following
\medskip\noindent
{\bf Definition of self-similarity.}
\index{self-similar random field (generalized field)}
\index{self-similarity parameter}
{\it The stationary generalized random field $X$ is self-similar
with self-similarity parameter $\alpha$ if
$X^A_t(\varphi)\stackrel{\Delta}{=} X(\varphi)$ for all
$\varphi\in{\cal S}$ and $t>0$ with the function
$A(t)=t^\alpha$.}
\medskip
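A basic example is the white noise field, i.e.\ the generalized
stationary Gaussian random field $X$ with $EX(\varphi)=0$ and
$EX(\varphi)X(\psi)=\int_{R^\nu}\varphi(x)\psi(x)\,dx$ for all
$\varphi,\,\psi\in{\cal S}$. With the function $A(t)=t^{\nu/2}$ we get
$$
EX^A_t(\varphi)X^A_t(\psi)
=t^{-\nu}\int_{R^\nu}\varphi\left(\frac xt\right)
\psi\left(\frac xt\right)dx
=\int_{R^\nu}\varphi(x)\psi(x)\,dx=EX(\varphi)X(\psi),
$$
and since the distribution of a Gaussian random variable with zero
expectation is determined by its variance,
$X^A_t(\varphi)\stackrel{\Delta}{=}X(\varphi)$ for all
$\varphi\in{\cal S}$ and $t>0$. Hence the white noise field is
self-similar with self-similarity parameter $\alpha=\frac\nu2$.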
To answer the above question one should first describe the
generalized self-similar random fields.
We try to explain the motivation behind the above definitions.
Given an ordinary random field $X(t)$, $t\in R^\nu$, and a
topological space ${\cal E}$ consisting of functions over
$R^\nu$ one can define the random variables
$X(\varphi)=\int_{R^\nu} \varphi(t)X(t)\,dt$,
$\varphi\in{\cal E}$. Some difficulty may arise when defining
this integral, but it can be overcome in all interesting cases.
If the space ${\cal E}$ is rich enough, and this is the case
if ${\cal E}={\cal S}$, then the integrals $X(\varphi)$,
$\varphi\in{\cal E}$, determine the random process $X(t)$. The
set of random variables $X(\varphi)$, $\varphi\in{\cal S}$, is
a generalized random field in all nice cases. On the other hand,
there are generalized random fields which cannot be obtained by
integrating ordinary random fields. In particular, the
generalized self-similar random fields we shall construct later
cannot be interpreted through ordinary fields. The above
definitions of various properties of generalized fields are
fairly natural, considering what these definitions mean for
generalized random fields obtained by integrating ordinary fields.
The investigation of generalized random fields is simpler than that
of ordinary discrete random fields, because in the continuous case
more symmetry is available. Moreover, in the study or construction
of discrete random fields generalized random fields may play a useful
role. To understand this let us remark that if we have a generalized
random field $X(\varphi)$, $\varphi\in{\cal S}$, and we can extend the
space ${\cal S}$ containing the test function $\varphi$ to such a larger
linear space ${\cal T}$ for which $\Phi\subset{\cal T}$ with the above
introduced linear space~$\Phi$, then we can define the discrete random
field $X(\varphi)$, $\varphi\in\Phi$, by a restriction of the space of
test functions of the generalized random field $X(\varphi)$,
$\varphi\in{\cal T}$. This random field can be considered as the
discretization of the original generalized random field
$X(\varphi)$, $\varphi\in{\cal S}$.
We finish this chapter by defining the generalized subordinated
random fields. Then we shall explain the basic results about
the Schwartz space ${\cal S}$ and generalized functions in
a separate subchapter.
Let $X(\varphi)$, $\varphi\in{\cal S}$, be a generalized
stationary Gaussian random field. The formula
$T_tX(\varphi)=X(T_t\varphi)$, $T_t\varphi(x)=\varphi(x-t)$,
defines the shift transformation for all $t\in R^\nu$. Let
${\cal H}$ denote the real Hilbert space consisting of the
${\cal B}={\cal B}(X(\varphi),\;\varphi\in{\cal S})$ measurable
random variables with finite second moment. The shift
transformation can be extended to a group of unitary
transformations over ${\cal H}$ similarly to the discrete case.
\medskip\noindent
{\bf Definition of generalized random fields subordinated to a
generalized stationary Gaussian random field.}
\index{random field (generalized) subordinated to a generalized
stationary Gaussian random field}
{\it Given a generalized stationary Gaussian random field
$X(\varphi)$, $\varphi\in{\cal S}$, we define the Hilbert
space ${\cal H}$ and the shift transformations $T_t$,
$t\in R^\nu$, over ${\cal H}$ as above. A generalized
stationary random field $\xi(\varphi)$, $\varphi\in{\cal S}$,
is subordinated to the field $X(\varphi)$,
$\varphi\in{\cal S}$, if $\xi(\varphi)\in{\cal H}$ and
$T_t\xi(\varphi)=\xi(T_t\varphi)$ for all $\varphi\in{\cal S}$
and $t\in R^\nu$, and $E[\xi(\varphi_n)-\xi(\varphi)]^2\to0$ if
$\varphi_n\to\varphi$ in the topology of ${\cal S}$.}
\section{A brief overview about some results on
generalized functions}
Let us first describe the Schwartz spaces ${\cal S}$ and
${\cal S}^c$ in more detail. The space
${\cal S}^c=({\cal S}_\nu)^c$ consists of those complex valued,
infinitely differentiable functions of $\nu$ variables which decrease
at infinity, together with all their derivatives, faster than any
negative power of~$|x|$. More explicitly, $\varphi\in{\cal S}^c$ for a
complex valued function $\varphi$ of $\nu$ variables if
$$
\left|x_1^{k_1}\cdots x_\nu^{k_\nu}\frac{\partial^{q_1+\cdots+q_\nu}}
{\partial x_1^{q_1}\dots \partial x_\nu^{q_\nu}}
\varphi(x_1,\dots,x_\nu)\right|
\le C(k_1,\dots,k_\nu,q_1,\dots,q_\nu)
$$
for all points $x=(x_1,\dots,x_\nu)\in R^\nu$ and vectors
$(k_1,\dots,k_\nu)$, $(q_1,\dots,q_\nu)$ with non-negative
integer coordinates with some constant
$C(k_1,\dots,k_\nu,q_1,\dots,q_\nu)$ which may depend on the
function~$\varphi$. This formula can be written in a more
concise form as
$$
|x^k D^q\varphi(x)|\le C(k,q) \quad \textrm{with }
k=(k_1,\dots,k_\nu) \textrm{ and } q=(q_1,\dots,q_\nu),
$$
where $x=(x_1,\dots,x_\nu)$, $x^k=x_1^{k_1}\cdots x_\nu^{k_\nu}$
and $D^q=\frac{\partial^{q_1+\cdots+q_\nu}}
{\partial x_1^{q_1}\dots \partial x_\nu^{q_\nu}}$.
The elements of the space ${\cal S}$ are defined
similarly, with the only difference that they are real
valued functions.
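A typical example of a function in ${\cal S}$ is the Gaussian
function $\varphi(x)=e^{-|x|^2}$, $|x|^2=x_1^2+\cdots+x_\nu^2$.
Each derivative of it has the form $D^q\varphi(x)=P_q(x)e^{-|x|^2}$
with a polynomial $P_q$, hence
$$
|x^kD^q\varphi(x)|=|x^kP_q(x)|e^{-|x|^2}\le C(k,q)
$$
for all $x\in R^\nu$, since $e^{-|x|^2}$ tends to zero faster than the
reciprocal of any polynomial as $|x|\to\infty$. On the other hand, the
function $(1+|x|^2)^{-1}$ does not belong to~${\cal S}$, because it
decreases only polynomially fast.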
To define the spaces ${\cal S}$ and ${\cal S}^c$ we still have
to define the convergence in them. We say that a sequence of
functions $\varphi_n\in{\cal S}^c$ (or $\varphi_n\in{\cal S}$)
converges to a function $\varphi$ if
$$
\lim_{n\to\infty}\sup_{x\in R^\nu}
(1+|x|^2)^k|D^q\varphi_n(x)-D^q\varphi(x)|=0
$$
for all $k=1,2,\dots$ and $q=(q_1,\dots,q_\nu)$.
It can be seen that the limit function $\varphi$ is also in the
space~${\cal S}^c$ (or in the space ${\cal S}$).
A nice topology can be introduced in the space ${\cal S}^c$ (or
${\cal S}$) which induces the above convergence. The following
topology is an appropriate choice.
\index{space of test functions in the Schwartz space of
generalized functions}
Let a basis of neighbourhoods of the origin consist
of the sets
$$
U(k,q,\varepsilon)=\left\{\varphi\colon\;
\max_x(1+|x|^2)^k |D^q\varphi(x)|<\varepsilon\right\}
$$
with $k=0,1,2,\dots$, $q=(q_1,\dots,q_\nu)$ with non-negative
integer coordinates and $\varepsilon>0$, where
$|x|^2=x_1^2+\cdots+x_\nu^2$. A basis of neighbourhoods of an
arbitrary function $\varphi\in{\cal S}^c$ (or $\varphi\in{\cal S}$)
consists of sets of the form $\varphi+U(k,q,\varepsilon)$,
where the class of sets $U(k,q,\varepsilon)$ is a basis of
neighbourhoods of the origin. The fact that the convergence in
${\cal S}$ has such a representation (and a similar result
holds in some other spaces studied in the theory of
generalized functions) is of great importance in the theory
of generalized functions. We have also exploited this fact
in Chapter~6 of this Lecture Note. Topological
spaces with such a topology are called countably normed spaces.
The space of generalized functions ${\cal S}'$ consists of the
{\it continuous}\/ linear maps $F\colon\;{\cal S}\to C$ or
$F\colon\;{\cal S}^c\to C$, where $C$ denotes the linear space of
complex numbers. (In the study of the space ${\cal S}'$ we omit
the upper index~$c$, i.e. we do not indicate whether we are working in
real or complex space when this causes no problem.) We shall write the
map $F(\varphi)$, $F\in{\cal S}'$ and $\varphi \in{\cal S}$ (or
$\varphi\in{\cal S}^c$) in the form~$(F,\varphi)$.
\index{Schwartz space of generalized functions.}
We can define generalized functions $F\in{\cal S}'$ by the formula
$$
(F,\varphi)=\int \overline{f(x)}\varphi(x)\,dx \quad\textrm{for all }
\varphi\in{\cal S} \quad\textrm{ or }\varphi\in{\cal S}^c
$$
with a function $f$ such
that $\int(1+|x|^2)^{-p}|f(x)|\,dx<\infty$ with some $p\ge0$.
(The bar \ $\bar{\phantom{x}}$ \ denotes complex conjugation in
the sequel.) Such functionals are called regular. There are
also non-regular functionals in the space~${\cal S}'$. An
example of such a functional is the $\delta$-function defined by the
formula $(\delta,\varphi)=\varphi(0)$. There is a good
description of the generalized functions $F\in{\cal S}'$ (see
the book by I.~M.~Gelfand and G.~E.~Shilov: {\it Generalized
functions,}\/ Volume~2, Chapters~2 and~4), but we do not need this result,
hence we do not discuss it here. Another important question in
this field that we omit is about the interpretation of a usual
function as a generalized function in the case when it does not
define a regular functional in ${\cal S}'$ because of its strong
singularity at some points. In such cases some regularization
can be applied. It is an important problem in the theory of
generalized functions to find the appropriate generalized
functions in such cases, but it does not appear in the study
of the problems in this work.
The derivative and the Fourier transform of generalized functions are
also defined, and they play an important role in some investigations.
In the definition of these notions for generalized functions we want
to preserve the old definition if nice regular functionals are
considered for which these notions were already defined in classical
analysis. Such considerations lead to the definition
$(\frac{\partial}{\partial x_j}F,\varphi)
=-(F,\frac{\partial\varphi}{\partial x_j})$ of the derivative of
generalized functions. We do not discuss this definition in more
detail, because here we do not work with the derivatives of
generalized functions.
The Fourier transform of generalized functions in~${\cal S}'$ appears in
our discussion, although only in an implicit form. The
Bochner--Schwartz theorem discussed in Chapter~3 actually deals with
the Fourier transform of generalized functions. Hence the definition
of Fourier transform will be given in more detail.
We shall define the Fourier transform of a generalized function by
means of a natural extension of the Parseval formula, more precisely
of a simplified version of it, in which the same identity
$$
\int_{R^\nu} \overline{f(x)}g(x)\,dx
=\frac1{(2\pi)^\nu}\int_{R^\nu} \overline{\tilde f(u)}\tilde g(u)\,du
$$
is formulated with $\tilde f(u)=\int_{R^\nu}e^{i(u,x)}f(x)\,dx$
and $\tilde g(u)=\int_{R^\nu}e^{i(u,x)}g(x)\,dx$. But now we
consider a pair of functions $(f,g)$ with different properties.
We demand that $f$ should be an integrable function, and
$g\in{\cal S}^c$. (In the original version of the Parseval formula
both~$f$ and~$g$ are $L_2$ functions.)
The proof of this identity is simple. Indeed, since the function
$g\in{\cal S}^c$ can be calculated as the inverse Fourier transform
of its Fourier transform~$\tilde g\in{\cal S}^c$, i.e.\
$g(x)=\frac1{(2\pi)^\nu}\int e^{-i(u,x)}\tilde g(u)\,du$, we can
write
\begin{eqnarray*}
\int \overline{f(x)}g(x)\,dx
&=&\int \overline{f(x)}\left[\frac1{(2\pi)^\nu}\int e^{-i(u,x)}
\tilde g(u)\,du\right]\,dx\\
&=&\int\tilde g(u)\left[\frac1{(2\pi)^\nu}
\int\overline{ e^{i(u,x)}f(x)}\,dx\right]\,du \\
&=&\frac1{(2\pi)^\nu}\int \overline{\tilde f(u)}\tilde g(u)\,du.
\end{eqnarray*}
\index{Fourier transform of a generalized function}
Let us also remark that the Fourier transform $f\to\tilde f$ is a
bicontinuous map from ${\cal S}^c$ to~${\cal S}^c$. (This means
that this transformation is invertible, and both the Fourier
transform and its inverse are continuous maps from ${\cal S}^c$
to ${\cal S}^c$.) (The restriction of the Fourier transform to
the space ${\cal S}$ of real valued functions is a bicontinuous
map from ${\cal S}$ to the subspace of ${\cal S}^c$ consisting
of those functions $f\in{\cal S}^c$ for which
$f(-x)=\overline{f(x)}$ for all $x\in R^\nu$.)
The above results make the following definition of the Fourier
transform~$\tilde F$ of a generalized function $F\in{\cal S}'$ natural:
$$
(\tilde F, \tilde\varphi)=(2\pi)^\nu(F,\varphi)
\quad\textrm{for all } \varphi\in{\cal S}^c.
$$
Indeed, if $F\in{\cal S}'$ then $\tilde F$ is also a continuous
linear map on ${\cal S}^c$, i.e. it is also an element
of~${\cal S}'$. Besides, the version of the Parseval formula proved
above implies that if we consider an integrable function~$f$
on~$R^\nu$ both as a usual function and as a (regular)
generalized function, its Fourier transform agrees in the two cases.
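As an example let us compute with this definition the Fourier
transform of the $\delta$-function introduced earlier. Since every
function $\psi\in{\cal S}^c$ can be written as $\psi=\tilde\varphi$
with some $\varphi\in{\cal S}^c$, and
$\varphi(0)=\frac1{(2\pi)^\nu}\int\tilde\varphi(u)\,du$ by the inverse
Fourier transform formula, we get
$$
(\tilde\delta,\tilde\varphi)=(2\pi)^\nu(\delta,\varphi)
=(2\pi)^\nu\varphi(0)=\int_{R^\nu}\tilde\varphi(u)\,du
=(1,\tilde\varphi),
$$
i.e.\ $\tilde\delta$ is the regular functional defined by the constant
function $f(x)\equiv1$.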
\medskip
There are other classes of test functions and spaces of generalized
functions studied in the literature. The most popular among them is
the space~${\cal D}$ of infinitely many times differentiable functions
with compact support and its dual space~${\cal D}'$, the space of
continuous linear functionals on the space ${\cal D}$.
\index{generalized functions with the test function space of smooth
functions with compact support}
(These spaces are
generally denoted by~${\cal D}$ and~${\cal D}'$ in the literature,
although just the book~\cite{r15} that we use as our main reference
in this subject applies the notation ${\cal K}$ and~${\cal K}'$ for
them.) We shall discuss these spaces only very briefly.
The space ${\cal D}$ consists of the infinitely many times
differentiable functions with compact support. Thus it is a
subspace of~${\cal S}$. A sequence $\varphi_n\in{\cal D}$, $n=1,2,\dots$,
converges to a function~$\varphi$, if there is a compact set
$A\subset R^\nu$ which contains the supports of all these
functions~$\varphi_n$, and the functions $\varphi_n$ together with
all their derivatives converge uniformly to the
function~$\varphi$ and to its corresponding derivatives. It is not
difficult to see that also $\varphi\in{\cal D}$, and if the functions
$\varphi_n$ converge to $\varphi$ in the space~${\cal D}$, then they
also converge to~$\varphi$ in the space~${\cal S}$. Moreover, ${\cal D}$
is an everywhere dense subspace of~${\cal S}$. The space
${\cal D}'$ consists of the continuous linear functionals on~${\cal D}$.
The results describing the behaviour of ${\cal D}$ and ${\cal D}'$ are
very similar to those describing the behaviour of ${\cal S}$
and~${\cal S}'$. There is one difference that deserves some attention.
The Fourier transforms of the functions in ${\cal D}$ may not belong
to~${\cal D}$. The class of these Fourier transforms can be described
by means of some results in complex analysis. A topology can be
introduced on the set of Fourier transforms of the functions from
the space~${\cal D}$, which turns it into a topological
space~${\cal Z}$. If we want to apply Fourier
analysis in the space~${\cal D}$, then we also have to study this
space~${\cal Z}$ and its dual space~${\cal Z}'$. I omit the details.
\chapter{Wick polynomials}
In this chapter we consider the so-called Wick polynomials, a
multi-dimensional generalization of Hermite polynomials. They are
closely related to multiple Wiener--It\^o integrals.
Let $X_t$, $t\in T$, be a set of jointly Gaussian random
variables indexed by a parameter set $T$. Let $EX_t=0$ for all
$t\in T$. We define the real Hilbert spaces ${\cal H}_1$ and
${\cal H}$ in the following way: A square integrable random
variable is in ${\cal H}$ if and only if it is measurable with
respect to the $\sigma$-algebra ${\cal B}={\cal B}(X_t,\;t\in T)$,
and the scalar product in ${\cal H}$ is defined as
$(\xi,\eta)=E\xi\eta$, $\xi,\,\eta\in{\cal H}$. The Hilbert
space ${\cal H}_1\subset{\cal H}$ is the subspace of ${\cal H}$
generated by the finite linear combinations $\sum c_jX_{t_j}$,
$t_j\in T$. We consider only such sets of Gaussian random
variables $X_t$ for which ${\cal H}_1$ is separable. Otherwise
$X_t$, $t\in T$, can be arbitrary, but the most interesting case
for us is when $T={\cal S}_\nu$ or $\textrm{\BBB Z}_\nu$, and
$X_t$, $t\in T$, is a stationary Gaussian field.
Let $Y_1,Y_2,\dots$ be an orthonormal basis in ${\cal H}_1$. The
uncorrelated random variables $Y_1,Y_2,\dots$ are independent, since
they are (jointly) Gaussian. Moreover,
$$
{\cal B}(Y_1,Y_2,\dots)={\cal B}(X_t,\;t\in T).
$$
Let $H_n(x)$ denote the $n$-th Hermite polynomial with leading
coefficient~1, i.e. let
$H_n(x)=(-1)^ne^{x^2/2}\frac{d^n}{dx^n}(e^{-x^2/2})$.
\index{Hermite polynomials}
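The first few Hermite polynomials with this normalization, computed directly from the definition, are

```latex
% First Hermite polynomials with leading coefficient 1:
H_0(x)=1,\qquad H_1(x)=x,\qquad H_2(x)=x^2-1,\qquad
H_3(x)=x^3-3x,\qquad H_4(x)=x^4-6x^2+3.
```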
We recall the following results from analysis and measure theory.
\medskip\noindent
{\bf Theorem 2A.} {\it The Hermite polynomials $H_n(x)$, $n=0,1,2,\dots$,
form a complete orthogonal system in
$L_2\left(R,{\cal B},\frac1{\sqrt{2\pi}}e^{-x^2/2}\,dx\right)$.
(Here ${\cal B}$
denotes the Borel $\sigma$-algebra on the real line.)}
\medskip
Let $(X_j,{\cal X}_j,\mu_j)$, $j=1,2,\dots$, be countably many
independent copies of a probability space $(X,{\cal X},\mu)$.
(We denote the points of $X_j$ by $x_j$.) Let
$(X^\infty,{\cal X}^\infty,\mu^\infty)
=\prod\limits_{j=1}^\infty(X_j,{\cal X}_j,\mu_j)$. With such a
notation the following result holds.
\medskip\noindent
{\bf Theorem 2B.} {\it Let $\varphi_0,\varphi_1,\dots$,
$\varphi_0(x)\equiv1$, be a complete orthonormal system in the
Hilbert space $L_2(X,{\cal X},\mu)$. Then the functions
$\prod\limits_{j=1}^\infty\varphi_{k_j}(x_j)$, where only
finitely many indices $k_j$ differ from~0, form a complete
orthonormal basis in $L_2(X^\infty,{\cal X}^\infty,\mu^\infty)$.}
\medskip\noindent
{\bf Theorem 2C.} {\it Let $Y_1,Y_2,\dots$ be random variables
on a probability space $(\Omega,{\cal A},P)$ taking values in
a measurable space $(X,{\cal X})$. Let $\xi$ be a real valued
random variable measurable with respect to the $\sigma$-algebra
${\cal B}(Y_1,Y_2,\dots)$, and let $(X^\infty,{\cal X}^\infty)$
denote the infinite product
$(X\times X\times\cdots,{\cal X}\times {\cal X}\times\cdots)$
of the space $(X,{\cal X})$ with itself. Then there exists a
real valued, measurable function~$f$ on the space
$(X^\infty,{\cal X}^\infty)$ such that $\xi=f(Y_1,Y_2,\dots)$.}
\medskip\noindent
{\it Remark.}\/ Let us have a stationary random field
$X_n(\omega)$, $n\in\textrm{\BBB Z}_\nu$. Theorem~2C enables
us to extend the shift transformation $T_m$, defined as
$T_mX_n(\omega)=X_{n+m}(\omega)$, $n,\,m\in\textrm{\BBB Z}_\nu$,
for all random variables $\xi(\omega)$, measurable with respect
to the $\sigma$-algebra
${\cal B}(X_n(\omega),\,n\in\textrm{\BBB Z}_\nu)$. Indeed, by
Theorem~2C we can write
$\xi(\omega)=f(X_n(\omega),\,n\in\textrm{\BBB Z}_\nu)$,
and define
$T_m\xi(\omega)=f(X_{n+m}(\omega),\,n\in\textrm{\BBB Z}_\nu)$.
We still have to check that, although the function~$f$ in the
representation of the random variable~$\xi(\omega)$ is not
unique, the above definition of
$T_m\xi(\omega)$ is meaningful. To see this we have to
observe that if $f_1(X_n(\omega),\,n\in\textrm{\BBB Z}_\nu)
=f_2(X_n(\omega),\,n\in\textrm{\BBB Z}_\nu)$
for two functions $f_1$ and $f_2$ with probability~1, then
also $f_1(X_{n+m}(\omega),\,n\in\textrm{\BBB Z}_\nu)
=f_2(X_{n+m}(\omega),\,n\in\textrm{\BBB Z}_\nu)$
with probability~1 because of the stationarity of the random
field~$X_n(\omega)$, $n\in\textrm{\BBB Z}_\nu$. Let us also
observe that $\xi(\omega)\stackrel{\Delta}{=}T_m\xi(\omega)$
for all $m\in\textrm{\BBB Z}_\nu$. Besides, $T_m$ is a linear
operator on the linear space of random variables, measurable
with respect to the $\sigma$-algebra
${\cal B}(X_n,\,n\in\textrm{\BBB Z}_\nu)$. If we restrict it
to the space of square integrable random variables, then
$T_m$ is a unitary operator, and the operators $T_m$,
$m\in\textrm{\BBB Z}_\nu$, constitute a unitary
group.
\index{shift transformation (discrete random field)}
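As a simple illustration (with a hypothetical random variable in the case $\nu=1$): for $\xi(\omega)=X_0(\omega)X_1(\omega)+H_2(X_0(\omega))$ the above definition yields

```latex
% The shift T_m replaces every coordinate X_n of the underlying field
% by X_{n+m} in the representation \xi=f(X_n):
T_m\xi(\omega)=X_m(\omega)X_{m+1}(\omega)+H_2(X_m(\omega)).
```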
Let a stationary generalized field
$X=\{X(\varphi),\,\varphi\in{\cal S}\}$ be given. The
shift $T_t\xi$ of a random variable $\xi$, measurable
with respect to the $\sigma$-algebra
${\cal B}(X(\varphi),\,\varphi\in{\cal S})$ can be
defined for all $t\in R^\nu$ similarly to the discrete
case with the help of Theorem~2C and the following
result. If $\xi\in{\cal B}(X(\varphi),\,\varphi\in{\cal S})$
for a random variable~$\xi$, then there exists such a
countable subset $\{\varphi_1,\varphi_2,\dots\}\subset{\cal S}$
(depending on the random variable~$\xi$) for which $\xi$ is
${\cal B}(X(\varphi_1),X(\varphi_2),\dots)$ measurable. (We
write
$\xi(\omega)=f(X(\varphi_1)(\omega),X(\varphi_2)(\omega),\dots)$
with appropriate functions $f$, and $\varphi_1\in{\cal S}$,
$\varphi_2\in{\cal S}$,\dots, and define the shift $T_t\xi$ as
$T_t\xi(\omega)
=f(X(T_t\varphi_1)(\omega),X(T_t\varphi_2)(\omega),\dots)$,
where $T_t\varphi(x)=\varphi(x-t)$ for $\varphi\in{\cal S}$.) The
transformations $T_t$, $t\in R^{\nu}$, are linear operators
over the space of random variables measurable with respect to the
$\sigma$-algebra ${\cal B}(X(\varphi),\,\varphi\in{\cal S})$
with similar properties as their discrete counterpart.
\index{shift transformation (generalized random field)}
\medskip
Theorems 2A, 2B and 2C have the following important consequence.
\medskip\noindent
{\bf Theorem 2.1.} {\it Let $Y_1,Y_2,\dots$ be an orthonormal basis
in the Hilbert space ${\cal H}_1$ defined above with the help of a set
of Gaussian random variables $X_t$, $t\in T$. Then the set of all
possible finite products $H_{j_1}(Y_{l_1})\cdots H_{j_k}(Y_{l_k})$
is a complete orthogonal system in the Hilbert space ${\cal H}$
defined above. (Here $H_j(\cdot)$ denotes the $j$-th Hermite
polynomial.)}
\medskip\noindent
{\it Proof of Theorem 2.1.}\/ By Theorems~2A and~2B the set of
all possible products $\prod\limits_{j=1}^\infty H_{k_j}(x_j)$,
where only finitely many indices $k_j$ differ from 0, is a
complete orthonormal system in
$L_2\left(R^\infty,{\cal B}^\infty,\prod\limits_{j=1}^\infty
\frac{e^{-x_j^2/2}}{\sqrt{2\pi}}\,dx_j\right)$. Since
${\cal B}(X_t,\;t\in T)={\cal B}(Y_1,Y_2,\dots)$, Theorem~2C
implies that the mapping $f(x_1,x_2,\dots)\to f(Y_1,Y_2,\dots)$
is a unitary transformation from
$L_2\left(R^\infty,{\cal B}^\infty,\prod\limits_{j=1}^\infty
\frac{e^{-x_j^2/2}}{\sqrt{2\pi}}\,dx_j\right)$ to ${\cal H}$.
(We call a transformation from a Hilbert space to another Hilbert
space unitary if it is norm preserving and invertible.) Since the
image of a complete orthogonal system under a unitary
transformation is again a complete orthogonal system,
Theorem~2.1 is proved. \hfill$\qed$
\medskip
Let ${\cal H}_{\le n}\subset{\cal H}$, $n=1,2,\dots$, (with the
previously introduced Hilbert space ${\cal H}$) denote the Hilbert
space which is the closure of the linear space consisting of the
elements $P_n(X_{t_1},\dots,X_{t_m})$, where $P_n$ runs through
all polynomials of degree less than or equal to~$n$, and the
integer~$m$ and indices $t_1,\dots,t_m\in T$ are arbitrary. Let
${\cal H}_0={\cal H}_{\le 0}$ consist of the constant functions,
and let ${\cal H}_n={\cal H}_{\le n}\ominus{\cal H}_{\le n-1}$,
$n=1,2,\dots$, where $\ominus$ denotes the orthogonal complement. It
is clear that the Hilbert space ${\cal H}_1$ given in this
definition agrees with the previously defined Hilbert space
${\cal H}_1$. If $\xi_1,\dots,\xi_m\in{\cal H}_1$, and
$P_n(x_1,\dots,x_m)$ is a polynomial of degree $n$, then
$P_n(\xi_1,\dots,\xi_m)\in{\cal H}_{\le n}$. Hence Theorem~2.1
implies that
\begin{equation}
{\cal H}={\cal H}_0+{\cal H}_1+{\cal H}_2+\cdots, \label{(2.1)}
\end{equation}
where $+$ denotes direct sum. Now we introduce the following
\medskip\noindent
{\bf Definition of Wick polynomials.}
\index{Wick polynomials}
{\it Given a polynomial $P(x_1,\dots,x_m)$ of degree~$n$ and a set
of (jointly Gaussian) random variables
$\xi_1,\dots,\xi_m\in{\cal H}_1$, the Wick polynomial
\hbox{$\colon\!P(\xi_1,\dots,\xi_m)\!\colon$} is the orthogonal
projection of the random variable $P(\xi_1,\dots,\xi_m)$ to the
above defined subspace ${\cal H}_n$ of the Hilbert
space ${\cal H}$.}
\medskip
It is clear that Wick polynomials of different degree are orthogonal.
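The simplest nontrivial example: let $\xi\in{\cal H}_1$ with $E\xi^2=\sigma^2>0$. Since $\xi^2-\sigma^2$ is orthogonal both to the constants and to ${\cal H}_1$ (the third moments of centered jointly Gaussian random variables vanish), we get

```latex
% The Wick square of a single Gaussian random variable with variance \sigma^2:
\colon\!\xi^2\!\colon\;=\xi^2-\sigma^2=\sigma^2 H_2(\xi/\sigma).
```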
Given some $\xi_1,\dots,\xi_m\in{\cal H}_1$ define the subspaces
${\cal H}_{\le n}(\xi_1,\dots,\xi_m)\subset{\cal H}_{\le n}$, $n=1,2,\dots$,
as the set of all polynomials of the random variables
$\xi_1,\dots,\xi_m$ with degree less than or equal to~$n$. Let
${\cal H}_{\le0}(\xi_1,\dots,\xi_m)
={\cal H}_{0}(\xi_1,\dots,\xi_m)={\cal H}_0$,
and ${\cal H}_n(\xi_1,\dots,\xi_m)={\cal H}_{\le n}(\xi_1,\dots,\xi_m)\ominus
{\cal H}_{\le n-1}(\xi_1,\dots,\xi_m)$. With the help of
this notation we formulate the following
\medskip\noindent
{\bf Proposition 2.2.} {\it Let $P(x_1,\dots,x_m)$ be a polynomial
of degree~$n$. Then the random polynomial
$\colon\!P(\xi_1,\dots,\xi_m)\!\colon$ equals the orthogonal
projection of $P(\xi_1,\dots,\xi_m)$ to ${\cal H}_n(\xi_1,\dots,\xi_m)$.}
\medskip\noindent
{\it Proof of Proposition 2.2.}\/ Let
$\colon\!\bar P(\xi_1,\dots,\xi_m)\!\colon$
denote the projection of the random polynomial
$P(\xi_1,\dots,\xi_m)$ to ${\cal H}_n(\xi_1,\dots,\xi_m)$. Obviously
$$
P(\xi_1,\dots,\xi_m)-\colon\!\bar P(\xi_1,\dots,\xi_m)\!\colon\in
{\cal H}_{\le n-1}(\xi_1,\dots,\xi_m)\subseteq {\cal H}_{\le n-1}.
$$
Hence in order to prove Proposition~2.2 it is enough to show that for all
$\eta\in{\cal H}_{\le n-1}$
\begin{equation}
E\colon\!\bar P(\xi_1,\dots,\xi_m)\!\colon\eta=0, \label{(2.2)}
\end{equation}
since this means that $\colon\!\bar P(\xi_1,\dots,\xi_m)\!\colon$ is the
orthogonal projection of $P(\xi_1,\dots,\xi_m)\in{\cal H}_{\le n}$
to~${\cal H}_{\le n-1}$.
Let $\varepsilon_1,\varepsilon_2,\dots$ be an orthonormal
system in ${\cal H}_1$, orthogonal
to $\xi_1,\dots,\xi_m$, and such that
$\xi_1,\dots,\xi_m,\varepsilon_1,\varepsilon_2,\dots$ form
a basis in ${\cal H}_1$.
If $\eta=\prod\limits_{i=1}^m\xi_i^{l_i}
\prod\limits_{j=1}^\infty\varepsilon_j^{k_j}$ with
such exponents $l_i$ and $k_j$ that $\sum l_i+\sum k_j\le n-1$,
then (\ref{(2.2)}) holds for this random variable $\eta$
because of the independence of the random variables $\xi_i$
and $\varepsilon_j$. Since the linear combinations of such
$\eta$ are dense in ${\cal H}_{\le n-1}$,
formula~(\ref{(2.2)}) and Proposition~2.2 are proved.
\hfill$\qed$
\medskip\noindent
{\bf Corollary 2.3.} {\it Let $\xi_1,\dots,\xi_m$ be an orthonormal
system in ${\cal H}_1$, and let
$$
P(x_1,\dots,x_m)=\sum c_{j_1,\dots,j_m}x_1^{j_1}\cdots x_m^{j_m}
$$
be a homogeneous polynomial, i.e. let $j_1+\cdots+j_m=n$ with some
fixed number~$n$ for all sets $(j_1,\dots,j_m)$ appearing in this
summation. Then
$$
\colon\!P(\xi_1,\dots,\xi_m)\!\colon=\sum
c_{j_1,\dots,j_m}H_{j_1}(\xi_1)\cdots H_{j_m}(\xi_m).
$$
In particular,
$$
\colon\!\xi^n\!\colon=H_n(\xi) \quad \textrm{if } \xi\in{\cal H}_1,
\textrm{ and } E\xi^2=1.
$$
}
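A worked instance of Corollary 2.3: for the homogeneous polynomial $P(x_1,x_2)=x_1^2x_2$ of degree $n=3$ and an orthonormal pair $\xi_1,\xi_2\in{\cal H}_1$ we get

```latex
% Wick polynomial of P(x_1,x_2)=x_1^2 x_2 for orthonormal \xi_1,\xi_2:
\colon\!\xi_1^2\xi_2\!\colon\;=H_2(\xi_1)H_1(\xi_2)
=(\xi_1^2-1)\xi_2=\xi_1^2\xi_2-\xi_2.
```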
\medskip\noindent
{\it Remark.} Although we have defined the Wick polynomial (of degree~$n$)
for all polynomials $P(\xi_1,\dots,\xi_m)$ of degree~$n$, we could
have restricted our attention only to homogeneous polynomials of degree
$n$, since the contribution of each term
$c(l_1,\dots,l_m)\xi^{l_1}_1\cdots \xi_m^{l_m}$ of the polynomial
$P(\xi_1,\dots,\xi_m)$ with $l_1+\cdots+l_m<n$ to this Wick
polynomial equals zero.
\begin{equation}
\int\frac1{(1+|x|)^r}\,G(\,dx)<\infty \quad\textrm{with some number }
r>0.
\label{(3.3)}
\end{equation}
}
\medskip\noindent
{\it Remark.}\/ The above formulated results are actually not the
Bochner and Bochner--Schwartz theorem in their original form, they
are their consequences. In an Adjustment to Chapter~3 I formulate
the classical form of these theorems, and explain how the above
formulated results follow from them.
\medskip
The measure $G$ appearing in Theorems~3A and~3B is called the spectral
measure of the stationary field. A measure~$G$ with the same
properties as the measure~$G$ in Theorem~3A or~3B will also be called
a spectral measure. This terminology is justified, since there exists
a stationary random field with spectral measure~$G$ for all such~$G$.
\index{spectral measure of a stationary random field
(for discrete and generalized random fields)}
Let us now consider a stationary Gaussian random field
(discrete or generalized one) with spectral measure $G$. We
shall denote the space $L_2([-\pi,\pi)^\nu,{\cal B}^\nu,G)$
or $L_2(R^\nu,{\cal B}^\nu,G)$ simply by $L^2_G$. Let
${\cal H}_1$ be the real Hilbert space defined by means of
the stationary random field, as it was done in Chapter~2. Let
${\cal H}^c_1$ denote its complexification, i.e. the elements of
${\cal H}^c_1$ are of the form $X+iY$, \ $X,\,Y\in{\cal H}_1$,
and the scalar product is defined as
$(X_1+iY_1,X_2+iY_2)=EX_1X_2+EY_1Y_2+i(EY_1X_2-EX_1Y_2)$.
We are going to construct a unitary transformation $I$
from $L^2_G$ to ${\cal H}^c_1$. We shall define the random
spectral measure via this transformation.
Let ${\cal S}^c$ denote the Schwartz space of rapidly decreasing,
smooth, complex valued functions with the usual topology of the
Schwartz space. (The elements of ${\cal S}^c$ are of the form
$\varphi+i\psi$, \ $\varphi,\,\psi\in{\cal S}$.) We make the
following observation. The finite linear combinations
$\sum c_ne^{i(n,x)}$ are dense in $L_G^2$ in the discrete field
case, and the functions $\varphi\in{\cal S}^c$ are dense in $L_G^2$ in
the generalized field case. In the discrete field case this
follows from the Weierstrass approximation theorem, which states
that all continuous functions on $[-\pi,\pi)^\nu$ can be
approximated arbitrarily well in the supremum norm by
trigonometrical polynomials. In the generalized field case let
us first observe that the continuous functions with compact
support are dense in $L^2_G$. We claim that also the functions
of the space ${\cal D}$ are dense in $L^2_G$, where ${\cal D}$
denotes the class of (complex valued) infinitely many times
differentiable functions with compact support. Indeed, if
$\varphi\in{\cal D}$ is real valued, $\varphi(x)\ge0$ for all
$x\in R^\nu$, $\int\varphi(x)\,dx=1$, we define
$\varphi_t(x)=t^\nu\varphi(tx)$, and $f$ is a
continuous function with compact support, then
$f*\varphi_t\to f$ uniformly as $t\to\infty$. Here $*$ denotes
convolution. On the other hand, $f*\varphi_t\in{\cal D}$ for
all $t>0$. Hence ${\cal D}\subset{\cal S}^c$ is dense in $L^2_G$.
Finally we recall the following result from the theory of
distributions. The mapping $\varphi\to\tilde\varphi$ is an
invertible, bicontinuous transformation from ${\cal S}^c$
onto ${\cal S}^c$. In particular, the set of functions
$\tilde\varphi$, \ $\varphi\in{\cal S}^c$, is also dense
in $L^2_G$.
Now we define the mapping
\begin{equation}
I\left(\sum c_n e^{i(n,x)}\right)=\sum c_nX_n \label{(3.4)}
\end{equation}
in the discrete case, where the sum is finite, and
\begin{equation}
I(\widetilde{\varphi+i\psi})=X(\varphi)+iX(\psi),
\quad \varphi,\,\psi\in{\cal S} \label{($3.4'$)}
\end{equation}
in the generalized case.
Obviously,
\begin{eqnarray*}
\left\|\sum c_ne^{i(n,x)}\right\|_{L^2_G}^2
&=&\sum\sum c_n\bar c_m\int e^{i(n-m,x)}G(\,dx)\\
&=&\sum\sum c_n\bar c_m EX_nX_m=E\left|\sum c_nX_n\right|^2,
\end{eqnarray*}
and
\begin{eqnarray*}
\|\widetilde{\varphi+i\psi}\|^2_{L^2_G}
&=&\int[\tilde\varphi(x)\bar{\tilde\varphi}(x)
-i\tilde\varphi(x)\bar{\tilde\psi}(x)
+i\tilde\psi(x)\bar{\tilde\varphi}(x)
+\tilde\psi(x)\bar{\tilde\psi}(x)]G(\,dx)\\
&=&EX(\varphi)^2-iEX(\varphi)X(\psi)+iEX(\psi)X(\varphi)
+EX(\psi)^2 \\
&=&E\left|X(\varphi)+iX(\psi)\right|^2.
\end{eqnarray*}
This means that the mapping $I$ from a linear subspace of $L_G^2$
to ${\cal H}_1^c$ is norm preserving. Besides, the subspace
where $I$ was defined is dense in $L^2_G$, since the space of
continuous functions is dense in $L^2_G$ if $G$ is a finite
measure on the torus $R^\nu/2\pi\textrm{\BBB Z}_\nu$, and the
space of continuous functions with a compact support is dense
in $L^2_G(R^{\nu})$ if the measure~$G$ satisfies
relation~(\ref{(3.3)}). Hence the mapping $I$ can be uniquely
extended to a norm preserving transformation from $L^2_G$ to
${\cal H}^c_1$. Since the random variables $X_n$ or $X(\varphi)$
are obtained as the image of some element from $L_G^2$ under this
transformation, $I$ is a unitary transformation from $L^2_G$ to
${\cal H}^c_1$. A unitary transformation preserves not only the
norm, but also the scalar product. Hence
$\int f(x)\bar g(x)G(\,dx)=EI(f)\overline{I(g)}$ for all
$f,\,g\in L^2_G$.
Now we define the random spectral measure $Z_G(A)$ for all
$A\in{\cal B}^\nu$ such that $G(A)<\infty$ by the formula
$$
Z_G(A)=I(\chi_A),
$$
where $\chi_A$ denotes the indicator function of the set~$A$. It
is clear that
\medskip
\begin{description}
\item [(i)] The random variables $Z_G(A)$ are complex valued,
jointly Gaussian random variables. (The random variables
$\textrm{Re}\, Z_G(A)$ and $\textrm{Im}\, Z_G(A)$ with possibly
different sets~$A$ are jointly Gaussian.)
\item [(ii)] $EZ_G(A)=0$,
\item [(iii)] $EZ_G(A)\overline {Z_G(B)}=G(A\cap B)$,
\item [(iv)] $\sum\limits_{j=1}^nZ_G(A_j)
=Z_G\left(\bigcup\limits_{j=1}^n A_j\right)$ if
$A_1,\dots,A_n$ are disjoint sets.
Also the following relation holds.
\item [(v)] $Z_G(A)=\overline{Z_G(-A)}$.
This follows from the relation
\item [(v$'$)] $I(f)=\overline{I(f_-)}$ for all $f\in L^2_G$, where
$f_-(x)=\overline{f(-x)}$.
\end{description}
Relation (v$'$) can be simply checked if $f$ is a finite
trigonometrical polynomial in the discrete field case, or if
$f=\tilde\varphi$, $\varphi\in{\cal S}^c$, in the generalized field
case. (In the case $f=\tilde\varphi$, $\varphi\in{\cal S}^c$, the
following argument works. Put
$f(x)=\tilde\varphi_1(x)+i\tilde\varphi_2(x)$ with
$\varphi_1,\varphi_2\in{\cal S}$. Then $I(f)=X(\varphi_1)+iX(\varphi_2)$,
and $f_-(x)=\bar{\tilde\varphi}_1(-x)-i\bar{\tilde\varphi}_2(-x)
=\tilde\varphi_1(x)+i(\widetilde{-\varphi_2})(x)$, hence
$I(f_-)=X(\varphi_1)+iX(-\varphi_2)=X(\varphi_1)-iX(\varphi_2)
=\overline{I(f)}$.)
Then a simple limiting procedure implies~(v$'$) in the general
case. Relation~(iii) follows from the identity
$EZ_G(A)\overline {Z_G(B)}=EI(\chi_A)\overline{I(\chi_B)}
=\int \chi_A(x)\overline{\chi_B(x)}G(\,dx)=G(A\cap B)$. The remaining
properties of $Z_G(\cdot)$ are simple consequences of the definition.
\medskip\noindent
{\it Remark.}\/ Property (iv) could have been omitted from the
definition of random spectral measures, since it follows from
property~(iii). To show this it is enough to check that if
$A_1,\dots,A_n$ are disjoint sets, and property~(iii) holds, then
$$
E\left(\sum_{j=1}^n Z_G(A_j)-Z_G\left(\bigcup\limits_{j=1}^n A_j\right)\right)
\overline{\left(\sum_{j=1}^n Z_G(A_j)-Z_G
\left(\bigcup\limits_{j=1}^n A_j\right)\right)}=0.
$$
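Indeed, writing $A=\bigcup\limits_{j=1}^n A_j$ and expanding the product, each expectation can be computed by property~(iii), and the disjointness of the sets $A_j$ yields

```latex
% Expansion of E|\sum_j Z_G(A_j)-Z_G(A)|^2 with A=A_1\cup\dots\cup A_n:
\sum_{j,k}G(A_j\cap A_k)-\sum_j G(A_j\cap A)-\sum_k G(A\cap A_k)+G(A)
=\sum_j G(A_j)-2\sum_j G(A_j)+\sum_j G(A_j)=0.
```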
\medskip
Now we introduce the following
\medskip\noindent
{\bf Definition of random spectral measure.}
\index{Gaussian random spectral measure.}
{\it Let $G$ be a spectral measure. A set of random variables
$Z_G(A)$, $G(A)<\infty$, satisfying (i)--(v) is called a
(Gaussian) random spectral measure corresponding to the
spectral measure~$G$.}
\medskip
Given a Gaussian random spectral measure $Z_G$ corresponding to a
spectral measure $G$ we define the (one-fold) stochastic integral
$\int f(x)Z_G(\,dx)$ for an appropriate class of functions~$f$.
\index{one-fold stochastic integral}
Let us first consider simple functions of the form
$f(x)=\sum c_i\chi_{A_i}(x)$, where the sum is finite, and
$G(A_i)<\infty$ for all indices~$i$. In this case we define
$$
\int f(x)Z_G(\,dx)=\sum c_iZ_G(A_i).
$$
Then we have
\begin{equation}
E\left|\int f(x)Z_G(\,dx)\right|^2=\sum_{i,j} c_i\bar c_jG(A_i\cap A_j)
=\int |f(x)|^2G(\,dx). \label{(3.5)}
\end{equation}
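In particular, since the sets $A_i$ in the representation of a simple function can always be chosen disjoint, formula~(\ref{(3.5)}) takes the transparent form

```latex
% Formula (3.5) for a simple function with disjoint sets A_i:
E\left|\int f(x)Z_G(\,dx)\right|^2=\sum_i|c_i|^2G(A_i)
=\int|f(x)|^2G(\,dx).
```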
Since the simple functions are dense in $L^2_G$,
relation~(\ref{(3.5)}) enables us to define
$\int f(x)Z_G(\,dx)$ for all $f\in L^2_G$ via
$L_2$-continuity. It can be checked that the expressions
\begin{equation}
X_n=\int e^{i(n,x)}Z_G(\,dx), \quad n\in\textrm{\BBB Z}_\nu,
\label{(3.6)}
\end{equation}
and
\begin{equation}
X(\varphi)=\int\tilde\varphi(x) Z_G(\,dx), \quad \varphi\in{\cal S},
\label{($3.6'$)}
\end{equation}
defined with the help of the above defined (random) integral and
the random spectral measure~$Z_G$ are a Gaussian stationary discrete
random field and a Gaussian stationary generalized random field,
respectively, with spectral measure~$G$.
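To check, for example, that the discrete random field defined in~(\ref{(3.6)}) has spectral measure~$G$, it is enough to compute its correlation function with the help of the identity $\int f(x)\bar g(x)G(\,dx)=EI(f)\overline{I(g)}$ (the random variables $X_n$ are real valued because of property~(v)):

```latex
% Correlation function of the field X_n=\int e^{i(n,x)}Z_G(dx):
EX_nX_m=E\int e^{i(n,x)}Z_G(\,dx)\,\overline{\int e^{i(m,x)}Z_G(\,dx)}
=\int e^{i(n-m,x)}G(\,dx).
```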
We also have
$$
\int f(x)Z_G(\,dx)=I(f) \quad \textrm{for all } f\in L_G^2
$$
if we consider the previously defined mapping $I(f)$ with the
stationary random fields defined in~(\ref{(3.6)})
and~(\ref{($3.6'$)}). Now we
formulate the following
\medskip\noindent
{\bf Theorem 3.1.} {\it For a stationary Gaussian random field (a
discrete or generalized one) with a spectral measure $G$ there exists
a unique Gaussian random spectral measure $Z_G$ corresponding to the
spectral measure~$G$ on the same probability space as the Gaussian
random field such that relation~(\ref{(3.6)})
or~(\ref{($3.6'$)}) holds in the
discrete or generalized field case respectively.
Furthermore
\begin{equation}
{\cal B}(Z_G(A),\; G(A)<\infty)=\left\{
\begin{array}{l}
{\cal B}(X_n,\;n\in\textrm{\BBB Z}_\nu)
\textrm{ in the discrete field case,}\\
{\cal B}(X(\varphi),\;\varphi\in{\cal S})
\textrm{ in the generalized field case.}
\end{array} \right. \label{(3.7)}
\end{equation}
}
\medskip
If a stationary Gaussian random field $X_n$, $n\in\textrm{\BBB Z}_\nu$,
or $X(\varphi)$, $\varphi\in{\cal S}$, and a random spectral
measure $Z_G$ satisfy relation~(\ref{(3.6)}) or~(\ref{($3.6'$)}),
then we say that this random spectral measure is adapted to
this Gaussian random field.
\index{random spectral measure adapted to a Gaussian
random field}
\medskip\noindent
{\it Proof of Theorem 3.1.}\/ Given a stationary Gaussian
random field (a discrete or generalized one) with a spectral
measure~$G$, we have constructed a random spectral measure
$Z_G$ corresponding to the spectral measure~$G$. Moreover,
the random integrals given in formulas~(\ref{(3.6)})
or~(\ref{($3.6'$)}) define the original stationary random
field. Since all random variables $Z_G(A)$ are measurable with
respect to the original random field, relation~(\ref{(3.6)})
or~(\ref{($3.6'$)})
implies~(\ref{(3.7)}).
To prove the uniqueness, it is enough to observe that because
of the linearity and $L_2$ continuity of stochastic integrals
relation~(\ref{(3.6)}) or~(\ref{($3.6'$)}) implies that
$$
Z_G(A)=\int \chi_A(x)Z_G(\,dx)=I(\chi_A)
$$
for a Gaussian random spectral measure corresponding to the spectral
measure~$G$ appearing in Theorem~3.1. \hfill$\qed$
\medskip
Finally we list some additional properties of Gaussian
random spectral measures.
\index{Gaussian random spectral measure.}
%\medskip
\begin{description}
\item[(vi)] The random variables $\textrm{Re}\, Z_G(A)$ are
independent of the random variables $\textrm{Im}\, Z_G(A)$.
\item[(vii)] Random variables of the form $Z_G(A\cup(-A))$ are real
valued. If the sets $A_1\cup(-A_1)$,\dots, $A_n\cup(-A_n)$ are
disjoint, then the random variables $Z_G(A_1)$,\dots, $Z_G(A_n)$ are
independent.
\item[(viii)] If $A\cap(-A)=\emptyset$, then
$\textrm{Re}\, Z_G(-A)=\textrm{Re}\, Z_G(A)$,
$\textrm{Im}\, Z_G(-A)=-\textrm{Im}\, Z_G(A)$, and the (Gaussian)
random variables
$\textrm{Re}\, Z_G(A)$ and $\textrm{Im}\, Z_G(A)$ are
independent with expectation zero and variance $G(A)/2$.
\end{description}
\medskip
These properties easily follow from (i)--(v). Since $Z_G(\cdot)$
are complex valued Gaussian random variables, to prove the above
formulated independence it is enough to show that the real and
imaginary parts are uncorrelated. We show, as an example,
the proof of~(vi).
\begin{eqnarray*}
E\textrm{Re}\, Z_G(A)\textrm{Im}\, Z_G(B)&=&\frac1{4i}
E(Z_G(A)+\overline{Z_G(A)})
(Z_G(B)-\overline{Z_G(B)})\\
&=&\frac1{4i}E(Z_G(A)+Z_G(-A))(\overline{Z_G(-B)}-\overline{Z_G(B)})\\
&=&\frac1{4i}G(A\cap(-B))-\frac1{4i}G(A\cap B)\\
&&\qquad+\frac1{4i}G((-A)\cap(-B))-\frac1{4i}G((-A)\cap B)=0
\end{eqnarray*}
for all pairs of sets $A$ and $B$ such that $G(A)<\infty$, $G(B)<\infty$,
since $G(D)=G(-D)$ for all $D\in{\cal B}^\nu$. The fact that
$Z_G(A\cup(-A))$ is a real valued random variable, and the relations
$\textrm{Re}\, Z_G(-A)=\textrm{Re}\, Z_G(A)$,
$\textrm{Im}\, Z_G(-A)=-\textrm{Im}\, Z_G(A)$ under the conditions
of~(viii) follow directly from~(v). The remaining statements of~(vii)
and~(viii) can be proved similarly to~(vi); only the calculations are
simpler in this case.
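For instance, the variance statement of~(viii) follows from the following calculation. If $A\cap(-A)=\emptyset$, then by~(v) $\textrm{Re}\,Z_G(A)=\frac12(Z_G(A)+Z_G(-A))$, and property~(iii) together with $G(A)=G(-A)$ gives

```latex
% Variance of Re Z_G(A) when A and -A are disjoint; the cross terms
% vanish, since G(A\cap(-A))=0:
E(\textrm{Re}\,Z_G(A))^2=\frac14\,E(Z_G(A)+Z_G(-A))
\overline{(Z_G(A)+Z_G(-A))}=\frac14[G(A)+G(-A)]=\frac{G(A)}2.
```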
The properties of the random spectral measure $Z_G$ listed above imply
in particular that the spectral measure~$G$ determines the joint
distribution of the corresponding random variables $Z_G(B)$,
$B\in{\cal B}^\nu$.
\section{On the spectral representation of the covariance
function of stationary random fields}
The results formulated under the name of Bochner and
Bochner--Schwartz theorem (I use this name, although I actually
presented not these theorems themselves but an important
consequence of them) have the following content. Given a finite, even
measure~$G$ on the torus
$R^{\nu}/2\pi\textrm{\BBB Z}_\nu$ one can define a (Gaussian)
discrete stationary field with correlation function
satisfying~(\ref{(3.1)})
with this measure~$G$. For an even measure $G$ on~$R^\nu$
satisfying~(\ref{(3.3)}) there exists a (Gaussian) generalized
stationary field with correlation function defined in
formula~(\ref{(3.2)}) with this measure $G$. The Bochner and
Bochner--Schwartz theorems state that the correlation function
of all (Gaussian) discrete stationary fields, respectively of
all stationary generalized fields can be represented
in such a way. Let us explain this in more detail.
First I formulate the following
\medskip\noindent
{\bf Proposition~3C.} {\it Let $G$ be a finite measure on
the torus $R^{\nu}/2\pi\textrm{\BBB Z}_\nu$ such that
$G(A)=G(-A)$ for all measurable sets~$A$. Then there exists
a Gaussian discrete stationary field $X_n$,
$n\in\textrm{\BBB Z}_\nu$, with expectation zero such that
its correlation function $r(n)=EX_kX_{k+n}$,
$n,k\in\textrm{\BBB Z}_\nu$, is given by
formula~(\ref{(3.1)}) with this measure~$G$.
Let $G$ be a measure on~$R^\nu$ satisfying~(\ref{(3.3)}) and such that
$G(A)=G(-A)$ for all measurable sets~$A$. Then there exists a Gaussian
stationary generalized field $X(\varphi)$, $\varphi\in{\cal S}$, with
expectation $EX(\varphi)=0$ for all $\varphi\in{\cal S}$ such that
its covariance function $EX(\varphi)X(\psi)$, $\varphi,\psi\in{\cal S}$,
satisfies formula~(\ref{(3.2)}) with this measure~$G$.
Moreover, the correlation function $r(n)$ or $EX(\varphi)X(\psi)$,
$\varphi,\psi\in{\cal S}$, determines the measure~$G$ uniquely.}
\medskip\noindent
{\it Proof of Proposition~3C.}\/ By Kolmogorov's theorem about
the existence of random processes with consistent finite
dimensional distributions it is enough to prove the following
statement to show the existence of the Gaussian discrete
stationary field with the demanded properties. For any points
$n_1,\dots,n_p\in\textrm{\BBB Z}_\nu$ there exists a Gaussian
random vector $(X_{n_1},\dots,X_{n_p})$ with expectation zero
and covariance matrix $EX_{n_j}X_{n_k}=r(n_j-n_k)$. (Observe
that the function~$r(n)$ is real valued, $r(n)=r(-n)$, because
of the evenness of the spectral measure~$G$.) Hence it is enough
to check that the corresponding matrix is positive definite,
i.e. $\sum\limits_{j,k} c_jc_k r(n_j-n_k)\ge0$ for all real
vectors $(c_1,\dots,c_p)$. This relation holds, because
$\sum\limits_{j,k} c_jc_k r(n_j-n_k)=\int
|\sum\limits_j c_je^{i(n_j,x)}|^2\,G(\,dx)\ge0$ by
formula~(\ref{(3.1)}).
It can be proved similarly that in the generalized field case
there exists a Gaussian random field with expectation zero whose
covariance function satisfies formula~(\ref{(3.2)}). (Let us
observe that the relation $G(A)=G(-A)$ implies that
$EX(\varphi)X(\psi)$ is a real number for all
$\varphi,\,\psi\in{\cal S}$, since
$EX(\varphi)X(\psi)=\overline{EX(\varphi)X(\psi)}$ in this case.
In the proof of this identity we exploit that
$\bar{\tilde f}(x)=\tilde f(-x)$ for a real valued function~$f$.)
We also have to show that a random field with such a distribution is
a generalized field, i.e. it satisfies properties~(a) and~(b) given
in the definition of generalized fields. It is not difficult to show
that if $\varphi_n\to\varphi$ in the topology of the space
${\cal S}$, then $E[X(\varphi_n)-X(\varphi)]^2
=\int|\tilde\varphi_n(x)-\tilde\varphi(x)|^2 G(\,dx)\to0$ as
$n\to\infty$, hence property~(b) holds. (Here we exploit that the
transformation $\varphi\to\tilde\varphi$ is bicontinuous in the
space~${\cal S}$.) Property~(a) also holds, because, as it is not
difficult to check with the help of formula~(\ref{(3.2)}),
\begin{eqnarray*}
&&E[a_1X(\varphi_1)+a_2X(\varphi_2)
-X(a_1\varphi_1+a_2\varphi_2)]^2\\
&&\qquad =\int\left|a_1\tilde\varphi_1(x)+a_2\tilde\varphi_2(x)
-(\widetilde{a_1\varphi_1+a_2\varphi_2})(x)\right|^2G(\,dx)=0.
\end{eqnarray*}
It is clear that the Gaussian random field constructed in
such a way is stationary.
Finally, as we have seen in our considerations in the main
text, the correlation function determines the integral
$\int f(x)\,G(\,dx)$ for all continuous functions~$f$ with
a bounded support, hence it also determines the
measure~$G$. \hfill$\qed$
\medskip
The Bochner and Bochner--Schwartz theorems enable us to
show that the correlation function of all stationary
(Gaussian) fields (discrete or generalized ones) can be
presented in the above way with an appropriate spectral
measure~$G$. To see this let us formulate these results
in their original form.
To formulate Bochner's theorem first we introduce the
following notion.
\medskip\noindent
{\bf Definition of positive definite functions.}
\index{positive definite function}
{\it Let $f(x)$ be a (complex valued) function on
$\textrm{\BBB Z}_\nu$ (or on $R^\nu$). We say that $f(\cdot)$
is a positive definite function if for all parameters~$p$,
complex numbers $c_1,\dots,c_p$ and points $x_1,\dots,x_p$ in
$\textrm{\BBB Z}_\nu$ (or in $R^\nu$) the inequality
$$
\sum_{j=1}^p\sum_{k=1}^p c_j\bar c_k f(x_j-x_k)\ge0
$$
holds.}
\medskip
A simple example of a positive definite function is the function
$f(x)=e^{i(t,x)}$, where $t\in\textrm{\BBB Z}_\nu$ in the discrete,
and $t\in R^\nu$ in the continuous case. Bochner's theorem provides
a complete description of positive definite functions.
\medskip\noindent
{\bf Bochner's theorem. (Its original form.)}
\index{Bochner theorem}
{\it A complex valued function $f(x)$ defined on
$\textrm{\BBB Z}_\nu$ is positive definite if and only if it can
be written in the form $f(x)=\int e^{i(t,x)}G(\,dt)$ for all
$x\in\textrm{\BBB Z}_\nu$ with a finite measure~$G$ on the
torus~$R^\nu/2\pi\textrm{\BBB Z}_\nu$. The measure~$G$ is uniquely
determined.
A complex valued function $f(x)$ defined on $R^\nu$ is continuous
at the origin and positive definite if and only if it can be
written in the form $f(x)=\int e^{i(t,x)}G(\,dt)$ for all
$x\in R^\nu$ with a finite measure~$G$ on $R^\nu$. The
measure~$G$ is uniquely determined.}
\medskip
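As a numerical illustration of the discrete-parameter statement (a sketch assuming a purely atomic measure~$G$; it is no part of the formal argument), one can tabulate the values $f(x_j-x_k)$ and verify that the resulting Hermitian matrix is positive semidefinite:

```python
import numpy as np

# A hypothetical finite measure G with a few atoms t_m and weights G({t_m})
ts = np.array([0.4, 2.0, -0.4, -2.0])
ws = np.array([1.0, 0.7, 1.0, 0.7])

def f(x):
    # f(x) = \int e^{i(t,x)} G(dt)
    return np.sum(ws * np.exp(1j * ts * x))

xs = np.arange(6)                          # points x_1,...,x_p in Z^1
M = np.array([[f(xj - xk) for xk in xs] for xj in xs])

# f(-x) = conj(f(x)), so M is Hermitian; positive definiteness of f means
# that M is positive semidefinite, i.e. all its eigenvalues are >= 0
eig = np.linalg.eigvalsh(M)
assert np.all(eig > -1e-10)
```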
It is not difficult to see that the covariance function
$r(n)=EX_kX_{k+n}$, ($EX_n=0$), $k,n\in\textrm{\BBB Z}_\nu$,
of a stationary (Gaussian) random field~$X_n$ is a positive
definite function, since
$\sum\limits_{j,k} c_j\bar c_kr(n_j-n_k)=E|\sum\limits_j c_jX_{n_j}|^2\ge0$
for any vector $(c_1,\dots,c_p)$. Hence Bochner's theorem can be applied
to it. Besides, the relation $r(n)=r(-n)$ together with the
uniqueness of the measure~$G$ appearing in Bochner's theorem imply that
the identity $G(A)=G(-A)$ holds for all measurable sets~$A$. This
implies the result formulated in the main text under the name Bochner's
theorem.
\medskip
The Bochner--Schwartz theorem yields an analogous representation
of positive definite generalized functions in~${\cal S}'$ as the
Fourier transforms of positive generalized functions in~${\cal S}'$.
It also states a similar result about generalized functions in the
space~${\cal D}'$. To formulate it we have to introduce some
definitions. First we have to clarify what a positive generalized
function is. We introduce this notion both in the
space~${\cal S}'$ and in ${\cal D}'$, and then we characterize
such functionals in a theorem.
\medskip\noindent
{\bf Definition of positive generalized functions.}
\index{positive generalized function}
{\it A linear functional $F\in{\cal S}'$ (or $F\in{\cal D}'$) is
called a positive generalized function if $(F,\varphi)\ge0$ for
all test functions $\varphi\in{\cal S}$ (or $\varphi\in{\cal D}$)
such that $\varphi(x)\ge0$ for all $x\in R^\nu$.}
\medskip\noindent
{\bf Theorem about the representation of positive generalized
functions.} {\it All positive generalized functions
$F\in{\cal S}'$ can be given in the form
$(F,\varphi)=\int \varphi(x)\mu(\,dx)$, where $\mu$ is a
polynomially increasing measure on $R^\nu$, i.e.\ it satisfies
the relation $\int(1+|x|^2)^{-p}\mu(\,dx)<\infty$ with
some $p>0$. Similarly, all positive generalized functions in
${\cal D}'$ can be given in the form
$(F,\varphi)=\int \varphi(x)\mu(\,dx)$ with a measure
$\mu$ on $R^\nu$ which is finite on all bounded regions. The
positive generalized function~$F$ uniquely determines the
measure~$\mu$ in both cases.}
\medskip
We also introduce a rather technical notion and formulate a result
about it. Let us remark that if $\varphi\in{\cal S}^c$ and
$\psi\in{\cal S}^c$, then their product $\varphi\psi\in{\cal S}^c$ as well.
The analogous result also holds in the space ${\cal D}$.
\medskip\noindent
{\bf Definition of multiplicatively positive generalized functions.}
\index{multiplicatively positive generalized function}
{\it A generalized function $F\in{\cal S}'$ (or $F\in{\cal D}'$) is
multiplicatively positive if
$(F,\varphi\bar\varphi)=(F,|\varphi|^2)\ge0$ for all
$\varphi\in{\cal S}^c$ (or $\varphi\in{\cal D}$).}
\medskip\noindent
{\bf Theorem about the characterization of multiplicatively positive
generalized functions.} {\it A generalized function $F\in{\cal S}'$
(or $F\in{\cal D}'$) is multiplicatively positive if and only if it is
positive.}
\medskip
Now we introduce the definition of positive definite generalized
functions.
\medskip\noindent
{\bf Definition of positive definite generalized functions.}
\index{positive definite generalized function}
{\it A generalized function $F\in{\cal S}'$ (or $F\in{\cal D}'$) is
positive definite if $(F,\varphi*\varphi^*)\ge0$ for all
$\varphi\in{\cal S}^c$ (or $\varphi\in{\cal D}$), where
$\varphi^*(x)=\overline {\varphi(-x)}$, and $*$ denotes
convolution, i.e. $\varphi*\varphi^*(x)=\int\varphi(t)
\overline{\varphi(t-x)}\,dt$.}
\medskip
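The convolution $\varphi*\varphi^*$ has Fourier transform $|\tilde\varphi|^2\ge0$, which is what makes this definition natural. A discrete sketch (finite complex sequences stand in for test functions; all values are arbitrary illustrations) shows this:

```python
import numpy as np

# A finite complex sequence playing the role of phi, supported on 0,...,3
phi = np.array([1.0 + 0.5j, -0.3, 0.8j, 0.2])

# phi*(x) = conj(phi(-x)): reverse the sequence and conjugate it;
# phi_star is then supported on -3,...,0
phi_star = np.conj(phi[::-1])

conv = np.convolve(phi, phi_star)           # (phi * phi*)(x), lags -3,...,3

# The Fourier transform of phi * phi* should be |phi~|^2 >= 0; we check
# realness and positivity on a grid of frequencies.
freqs = np.linspace(-np.pi, np.pi, 101)
n = np.arange(len(conv)) - (len(phi) - 1)   # the lags carried by conv
ft = np.array([np.sum(conv * np.exp(1j * w * n)) for w in freqs])

assert np.all(np.abs(ft.imag) < 1e-10)      # the transform is real ...
assert np.all(ft.real > -1e-10)             # ... and nonnegative
```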
We refer to~\cite{r15} for an explanation why this
definition of positive definite generalized functions is
natural. Let us remark that if $\varphi,\psi\in{\cal S}^c$,
then $\varphi*\psi\in{\cal S}^c$, and the analogous result
holds in~${\cal D}$. The original version of the
Bochner--Schwartz theorem has the following form.
\medskip\noindent
{\bf Bochner--Schwartz theorem. (Its original form.)}
\index{Bochner--Schwartz theorem}
{\it Let $F$ be a positive definite generalized function in
the space ${\cal S}'$ (or ${\cal D}'$). Then it is the Fourier
transform of a polynomially increasing measure~$\mu$ on
$R^\nu$, i.e. the identity
$(F,\varphi)=\int\tilde\varphi(x)\,\mu(\,dx)$ holds for all
$\varphi\in{\cal S}^c$ (or $\varphi\in{\cal D}$) with a
measure $\mu$ that satisfies the relation
$\int(1+|x|^2)^{-p}\mu(\,dx)<\infty$ with an appropriate
$p>0$. The generalized function~$F$ uniquely determines the
measure~$\mu$. On the other hand, if $\mu$ is a polynomially
increasing measure on $R^\nu$, then the formula
$(F,\varphi)=\int \tilde\varphi(x)\mu(\,dx)$ with
$\varphi\in{\cal S}^c$ (or $\varphi\in{\cal D}$) defines
a positive definite generalized function~$F$ in the space
${\cal S}'$ (or ${\cal D}'$).}
\medskip\noindent
{\it Remark.} It is a remarkable and surprising fact that the class
of positive definite generalized functions is represented by the
same class of measures~$\mu$ in the spaces ${\cal S}'$ and ${\cal D}'$.
(In the representation of positive generalized functions the class
of measures~$\mu$ considered in the case of ${\cal D}'$ is much
larger than in the case of ${\cal S}'$.) Let us remark that in the
representation of the positive definite generalized functions in
${\cal D}'$ the function $\tilde\varphi$ we integrate is not in the
class~${\cal D}$, but in the space~${\cal Z}$ consisting of the Fourier
transforms of the functions in~${\cal D}$.
\medskip
It is relatively simple to prove the representation of positive
definite generalized functions given in the Bochner--Schwartz
theorem for the class~${\cal S}'$. Some calculation shows that if $F$ is
a positive definite generalized function, then its Fourier transform
is a multiplicatively positive generalized function. Indeed, since
the Fourier transform of the convolution $\varphi*\psi(x)$ equals
$\tilde\varphi(t)\tilde\psi(t)$, and the Fourier transform of
$\varphi^*(x)=\overline{\varphi(-x)}$ equals $\overline{\tilde\varphi(t)}$,
the Fourier transform of $\varphi*\varphi^*(x)$ equals
$\tilde\varphi(t)\bar{\tilde\varphi}(t)$. Hence the positive
definiteness property of the generalized function~$F$ and the
definition of the Fourier transform of generalized functions imply that
$(\tilde F,\tilde\varphi\bar{\tilde\varphi})
=(2\pi)^{\nu}(F,\varphi*\varphi^*)\ge0$ for all $\varphi\in{\cal S}^c$.
Since every function of~${\cal S}^c$ is the Fourier transform
$\tilde\varphi$ of some function $\varphi\in{\cal S}^c$, this implies
that $\tilde F$ is a multiplicatively positive and as a consequence
a positive generalized function in ${\cal S}'$. Such generalized
functions have a good representation with the help of a polynomially
increasing positive measure~$\mu$. Since
$(F,\varphi)=(2\pi)^{-\nu}(\tilde F,\tilde\varphi)$ it is not difficult
to prove the Bochner--Schwartz theorem for the space~${\cal S}'$ with the
help of this fact. The proof is much harder if the space~${\cal D}'$
is considered, but we do not need that result.
The Bochner--Schwartz theorem in itself is not sufficient to describe
the correlation function of a generalized random field. We still
need another important result of Laurent Schwartz which gives useful
information about the behaviour of (Hermitian) bilinear functionals
in ${\cal S}^c$ and some additional information about the behaviour of
translation invariant (Hermitian) bilinear functionals in this space.
To formulate these results first we introduce the following definition.
\medskip\noindent
{\bf Definition of Hermitian bilinear and translation invariant
Hermitian bilinear functionals in the space ${\cal S}^c$.} {\it A function
$B(\varphi,\psi)$, $\varphi,\psi\in{\cal S}^c$, is a Hermitian bilinear
functional in the space ${\cal S}^c$ if for all fixed $\psi\in{\cal S}^c$
$B(\varphi,\psi)$ is a continuous linear functional of the
variable~$\varphi$ in the topology of~${\cal S}^c$, and for all fixed
$\varphi\in{\cal S}^c$ $\overline{B(\varphi,\psi)}$ is a continuous
linear functional of the variable~$\psi$ in the topology of~${\cal S}^c$.
A Hermitian bilinear functional $B(\varphi,\psi)$ in ${\cal S}^c$ is
translation invariant if it does not change by a simultaneous shift
of its variables $\varphi$ and $\psi$, i.e.\ if
$B(\varphi(x),\psi(x))=B(\varphi(x+h),\psi(x+h))$ for all $h\in R^\nu$.}
\medskip\noindent
{\bf Definition of positive definite Hermitian bilinear
functionals.}
\index{positive definite Hermitian bilinear functional}
{\it We say that a Hermitian bilinear functional
$B(\varphi,\psi)$ in ${\cal S}^c$ is positive definite if
$B(\varphi,\varphi)\ge0$ for all $\varphi\in{\cal S}^c$.}
\medskip
The next result characterizes the Hermitian bilinear and
translation invariant Hermitian bilinear functionals
in~${\cal S}^c$.
\medskip\noindent
{\bf Theorem 3D.} {\it All Hermitian bilinear functionals
$B(\varphi,\psi)$ in ${\cal S}^c$ can be given in the form
$B(\varphi,\psi)=(F_1,\varphi(x)\overline{\psi(y)})$,
$\varphi,\psi\in{\cal S}^c$, where $F_1$ is a continuous
linear functional on ${\cal S}^c\times{\cal S}^c$, i.e. it
is a generalized function in~${{\cal S}_{2\nu}}'$.
A translation invariant Hermitian bilinear functional in ${\cal S}^c$
can be given in the form $B(\varphi,\psi)=(F,\varphi*\psi^*)$,
$\varphi,\psi\in{\cal S}^c$, where $F\in{\cal S}'$,
$\psi^*(x)=\overline{\psi(-x)}$, and $*$ denotes convolution.
The Hermitian bilinear form $B(\varphi,\psi)$ determines the
generalized function $F_1$ uniquely, and if it is translation
invariant, then the same holds for the generalized
function $F$. Besides, for all functionals
$F_1\in{{\cal S}_{2\nu}}'$ and $F\in{\cal S}'$ the above formulas
define a Hermitian bilinear functional and a translation
invariant Hermitian bilinear functional in~${\cal S}^c$
respectively.}
\medskip
Let us consider a Gaussian generalized random field $X(\varphi)$,
$\varphi\in{\cal S}$, with expectation zero together with its
correlation function $B(\varphi,\psi)=EX(\varphi)X(\psi)$, \
$\varphi,\psi\in{\cal S}$. More precisely, let us consider the
complexification $X(\varphi_1+i\varphi_2)=X(\varphi_1)+iX(\varphi_2)$
of this random field and its correlation function
$B(\varphi,\psi)=EX(\varphi)\overline{X(\psi)}$,
$\varphi,\psi\in{\cal S}^c$. This correlation function
$B(\varphi,\psi)$ is a translation invariant Hermitian bilinear
functional in ${\cal S}^c$, hence it can be written in the form
$B(\varphi,\psi)=(F,\varphi*\psi^*)$ with an appropriate
$F\in{\cal S}'$. Moreover, $B(\varphi,\varphi)\ge0$ for all
$\varphi\in{\cal S}^c$, and this means that the generalized
function $F\in{\cal S}'$ corresponding to $B(\varphi,\psi)$ is
positive definite. Hence the Bochner--Schwartz theorem can be applied
for it, and it yields that
$$
EX(\varphi)\overline{X(\psi)}=\int \widetilde{\varphi*\psi^*}(x)\,G(\,dx)
=\int \tilde\varphi(x)\bar{\tilde\psi}(x)\,G(\,dx) \quad\textrm{for all }
\varphi,\,\psi\in {\cal S}^c
$$
with a uniquely determined, polynomially increasing measure~$G$
on~$R^\nu$. Now we prove Theorem~3B with the help of these results.
\medskip\noindent
{\it Proof of Theorem 3B.}\/ We have already proved
relations~(\ref{(3.2)}) and~(\ref{(3.3)}) with the help of some
results about generalized functions. To complete the proof of
Theorem~3B we still have to show that $G$ is an even measure.
In the proof of this statement we exploit that for a real valued
function $\varphi\in{\cal S}$ the random variable $X(\varphi)$
is also real valued. Hence if $\varphi,\psi\in{\cal S}$, then
$EX(\varphi)X(\psi)=\overline{EX(\varphi)X(\psi)}$. Besides,
$\tilde\varphi(-x)=\bar{\tilde\varphi}(x)$ and
$\tilde\psi(-x)=\bar{\tilde\psi}(x)$ in this case. Hence
\begin{eqnarray*}
\int \tilde\varphi(x)\bar{\tilde\psi}(x)\,G(\,dx)
&=&\int \bar{\tilde\varphi}(x)\tilde\psi(x)\,G(\,dx)\\
&=&\int \tilde\varphi(-x)\bar{\tilde\psi}(-x)\,G(\,dx)
=\int \tilde\varphi(x)\bar{\tilde\psi}(x)\,G^-(\,dx)
\end{eqnarray*}
for all $\varphi,\psi\in{\cal S}$, where $G^-(A)=G(-A)$ for all
$A\in{\cal B}^\nu$. This relation implies that the measures~$G$
and~$G^-$ agree. The proof of Theorem~3B is completed.
\hfill$\qed$
\chapter{Multiple Wiener--It\^o integrals}
In this chapter we define the so-called multiple Wiener--It\^o
integrals, and we prove their most important properties with the
help of It\^o's formula, whose proof is postponed to the next
chapter. More precisely, we discuss in this chapter a modified
version of the Wiener--It\^o integrals with respect to a random
spectral measure rather than with respect to a random measure with
independent increments. This modification makes it necessary to
slightly change the definition of the integral. This modified
Wiener--It\^o integral seems to be a more useful tool than the
original one or the Wick polynomials, because it enables us to
describe the action of shift transformations.
Let $G$ be the spectral measure of a stationary Gaussian field
(discrete or generalized one). We define the following
{\it real}\/ Hilbert spaces $\bar{{\cal H}}_G^n$ and ${\cal H}_G^n$,
$n=1,2,\dots$. We have $f_n\in\bar{{\cal H}}_G^n$ if and only if
$f_n=f_n(x_1,\dots,x_n)$, \ $x_j\in R^\nu$, $j=1,2,\dots,n$, is a
complex valued function of $n$ variables, and
\medskip
\begin{description}
\item[(a)] $f_n(-x_1,\dots,-x_n)=\overline{f_n(x_1,\dots,x_n)}$,
\item[(b)]
$\|f_n\|^2=\int|f_n(x_1,\dots,x_n)|^2G(\,dx_1)\dots G(\,dx_n)<\infty$.
\end{description}
\medskip
Relation~(b) also defines the norm in $\bar{{\cal H}}^n_G$. The
subspace ${\cal H}^n_G\subset\bar{{\cal H}}_G^n$ contains those functions
$f_n\in\bar{{\cal H}}_G^n$ which are invariant under permutations of
their arguments, i.e.
\medskip
\begin{description}
\item[(c)] $f_n(x_{\pi(1)},\dots,x_{\pi(n)})=f_n(x_1,\dots,x_n)$
for all $\pi\in\Pi_n$, where $\Pi_n$ denotes the group of all
permutations of the set $\{1,2,\dots,n\}$.
\end{description}
\medskip
The norm in ${\cal H}_G^n$ is defined in the same way as in
$\bar{{\cal H}}_G^n$. Moreover, the scalar product is also similarly
defined, namely if $f,\,g\in\bar{{\cal H}}_G^n$, then
\begin{eqnarray*}
(f,g)&=&\int f(x_1,\dots,x_n)\overline{g(x_1,\dots,x_n)}
G(\,dx_1)\dots G(\,dx_n)\\
&=&\int f(x_1,\dots,x_n)g(-x_1,\dots,-x_n)G(\,dx_1)\dots G(\,dx_n).
\end{eqnarray*}
Because of the symmetry $G(A)=G(-A)$ of the spectral measure we have
$(f,g)=\overline{(f,g)}$, i.e. the scalar product $(f,g)$ is a real
number for all $f,\,g\in\bar{{\cal H}}_G^n$. This means that
$\bar{{\cal H}}_G^n$ is a real Hilbert space.
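This realness is easy to check numerically. In the sketch below a hypothetical even atomic measure plays the role of~$G$ (in the case $n=1$); the particular atoms, weights and function values are arbitrary:

```python
import numpy as np

# An even atomic measure G supported on the points x and -x, and two
# functions f, g in \bar H_G^1 satisfying condition (a): f(-x) = conj(f(x)).
x = np.array([0.7, 1.9])
w = np.array([0.4, 0.6])
wts = np.concatenate([w, w])              # weights of G on x and on -x

rng = np.random.default_rng(2)
fv = rng.normal(size=2) + 1j * rng.normal(size=2)
gv = rng.normal(size=2) + 1j * rng.normal(size=2)
f = np.concatenate([fv, np.conj(fv)])     # values on x, then on -x
g = np.concatenate([gv, np.conj(gv)])

scalar = np.sum(f * np.conj(g) * wts)     # the scalar product (f, g)
assert abs(scalar.imag) < 1e-12           # (f, g) is a real number
```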
We also define ${\cal H}_G^0=\bar{{\cal H}}_G^0$ as
the space of real constants with the norm $\|c\|=|c|$. We remark
that $\bar{{\cal H}}_G^n$ is actually the $n$-fold direct product of
${\cal H}_G^1$, while ${\cal H}_G^n$ is the $n$-fold symmetrical direct
product of ${\cal H}^1_G$. Condition~(a) means heuristically that
$f_n$ is the Fourier transform of a real valued function.
Finally we define the so-called Fock space $\textrm{Exp\,}{\cal H}_G$
whose elements are sequences of functions $f=(f_0,f_1,\dots)$,
$f_n\in{\cal H}_G^n$ for all $n=0,1,2,\dots$, such that
$$
\|f\|^2=\sum_{n=0}^\infty \frac1{n!}\|f_n\|^2<\infty.
$$
Given a function $f\in\bar{{\cal H}}^n_G$ we define
$\textrm{Sym}\, f$ as
$$
\textrm{Sym}\, f(x_1,\dots,x_n)=\frac1{n!}\sum_{\pi\in\Pi_n}
f(x_{\pi(1)},\dots,x_{\pi(n)}).
$$
Clearly, $\textrm{Sym}\, f\in{\cal H}_G^n$, and
\begin{equation}
\|\textrm{Sym}\, f\|\le \|f\|. \label{(4.1)}
\end{equation}
\index{Fock space}
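The operator $\textrm{Sym}$ and the inequality~(\ref{(4.1)}) can be checked numerically. In the sketch below the measure $G^n$ is replaced by the counting measure on a finite grid, a simplification made only for illustration:

```python
import math
from itertools import permutations
import numpy as np

# Symmetrization of a function of n = 3 variables, each variable
# discretized to k points; a random complex tensor stands in for f.
rng = np.random.default_rng(1)
k, n = 4, 3
f = rng.normal(size=(k, k, k)) + 1j * rng.normal(size=(k, k, k))

# (Sym f)(x_1,...,x_n) = (1/n!) sum over permutations pi of f(x_pi(1),...)
sym_f = sum(np.transpose(f, perm) for perm in permutations(range(n)))
sym_f = sym_f / math.factorial(n)

# Sym f is invariant under permutations of its arguments, and
# symmetrization does not increase the L^2 norm (formula (4.1))
assert np.allclose(sym_f, np.transpose(sym_f, (1, 0, 2)))
assert np.linalg.norm(sym_f) <= np.linalg.norm(f) + 1e-12
```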
Let $Z_G$ be a Gaussian random spectral measure corresponding
to the spectral measure~$G$ on a probability space
$(\Omega,{\cal A},P)$. We shall define the $n$-fold
Wiener--It\^o integrals
$$
I_G(f_n)=\frac1{n!}\int f_n(x_1,\dots,x_n)Z_G(\,dx_1)\dots Z_G(\,dx_n)
\quad \textrm{for } f_n\in\bar{{\cal H}}_G^n
$$
and
$$
I_G(f)=\sum_{n=0}^\infty I_G(f_n)\quad \textrm{for }
f=(f_0,f_1,\dots)\in\textrm{Exp}\,{\cal H}_G.
$$
We shall see that $I_G(f_n)=I_G(\textrm{Sym}\, f_n)$ for all
$f_n\in\bar{{\cal H}}_G^n$. Therefore, it would have been
sufficient to define the Wiener--It\^o integral only for
functions in ${\cal H}_G^n$. Nevertheless, some arguments
become simpler if we work in $\bar{{\cal H}}_G^n$. In the
definition of Wiener--It\^o integrals first we restrict
ourselves to the case when the spectral measure is
non-atomic, i.e. $G(\{x\})=0$ for all $x\in R^\nu$. This
condition is satisfied in all interesting cases. However,
we shall later show how one can get rid of this restriction.
First we introduce the notion of regular systems for some
collections of subsets of $R^\nu$, define a subclass
$\hat{\bar{{\cal H}}}_G^n\subset\bar{{\cal H}}_G^n$ of simple
functions with their help, and define the Wiener--It\^o
integrals for the functions of this subclass.
\medskip\noindent
{\bf Definition of regular systems and the class of simple
functions.} {\it Let
$${\cal D}=\{\Delta_j,\;j=\pm1,\pm2,\dots,\pm N\}
$$
be a finite collection of bounded, measurable sets in $R^\nu$
indexed by the integers $\pm1,\dots,\pm N$. We say that
${\cal D}$ is a regular system if $\Delta_j=-\Delta_{-j}$, and
$\Delta_j\cap\Delta_l=\emptyset$ if $j\neq l$ for all
$j,l=\pm1,\pm2,\dots,\pm N$.
\index{regular system of sets in $R^\nu$}
A function $f\in\bar{{\cal H}}_G^n$ is adapted to this system
${\cal D}$ if $f(x_1,\dots,x_n)$ is constant on the sets
$\Delta_{j_1}\times\Delta_{j_2}\times\cdots\times\Delta_{j_n}$, \
$j_l=\pm1,\dots,\pm N$, $l=1,2,\dots,n$, it vanishes outside
these sets and also on the sets for which $j_l=\pm j_{l'}$
for some $l\neq l'$.
A function $f\in\bar{{\cal H}}_G^n$ is in the class
$\hat{\bar{{\cal H}}}_G^n$ of simple functions,
and a (symmetric) function $f\in{\cal H}_G^n$ is in
the class $\hat{{\cal H}}_G^n$ of simple symmetric
functions if it is adapted to some regular system
${\cal D}=\{\Delta_j,\;j=\pm1,\dots,\pm N\}$.}
\index{simple function}
\medskip\noindent
{\bf Definition of Wiener--It\^o integral of simple functions.}
{\it
Let a simple function $f\in\hat{\bar{{\cal H}}}_G^n$ be adapted to
some regular system ${\cal D}=\{\Delta_j,\;j=\pm1,\dots,\pm N\}$.
Its Wiener--It\^o integral with respect to the random spectral
measure $Z_G$ is defined as
\begin{eqnarray}
&&\int f(x_1,\dots,x_n)Z_G(\,dx_1)\dots Z_G(\,dx_n)
\label{(4.2)} \\
&&\qquad =n!I_G(f)=\sum_{\substack{j_l=\pm1,\dots,\pm N\\ l=1,2,\dots,n}}
f(x_{j_1},\dots,x_{j_n})Z_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
\nonumber,
\end{eqnarray}
where $x_{j_l}\in\Delta_{j_l}$, $j_l=\pm1,\dots,\pm N$, $l=1,\dots,n$.}
\medskip
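The defining sum~(\ref{(4.2)}) is easy to simulate. The following sketch builds a toy Gaussian random spectral measure on a hypothetical two-set regular system (the values of $G$ and of the kernel are arbitrary choices, not taken from the text) and checks per realization that $I_G(f)$ is real valued and that $I_G(f)=I_G(\textrm{Sym}\,f)$:

```python
import numpy as np

rng = np.random.default_rng(3)
G = {1: 0.5, 2: 1.2}                      # G(Delta_1), G(Delta_2); G(-A)=G(A)

def sample_Z():
    # a Gaussian random spectral measure on the regular system
    # {Delta_j, j = +/-1, +/-2}, with Z(Delta_{-j}) = conj(Z(Delta_j))
    Z = {}
    for j, g in G.items():
        Z[j] = (rng.normal() + 1j * rng.normal()) * np.sqrt(g / 2)
        Z[-j] = np.conj(Z[j])
    return Z

# a simple kernel of order n = 2, vanishing when j_1 = +/- j_2, and
# satisfying f(-x_1, -x_2) = conj(f(x_1, x_2))  (property (a))
f = {}
for pair in [(1, 2), (1, -2), (2, 1), (2, -1)]:
    val = rng.normal() + 1j * rng.normal()
    f[pair] = val
    f[(-pair[0], -pair[1])] = np.conj(val)

def I(kernel, Z):
    # 2! I_G(f) = sum f(x_{j_1}, x_{j_2}) Z(Delta_{j_1}) Z(Delta_{j_2})
    return sum(v * Z[j1] * Z[j2] for (j1, j2), v in kernel.items()) / 2

sym_f = {(j1, j2): (f[(j1, j2)] + f[(j2, j1)]) / 2 for (j1, j2) in f}

Z = sample_Z()
assert abs(I(f, Z).imag) < 1e-12           # I_G(f) is real valued
assert abs(I(f, Z) - I(sym_f, Z)) < 1e-12  # I_G(f) = I_G(Sym f)
```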
We remark that although the regular system ${\cal D}$ to which
$f$ is adapted is not uniquely determined (the elements of
${\cal D}$ can be divided into smaller sets), the integral
defined in~(\ref{(4.2)}) is meaningful, i.e. it does not
depend on the choice of ${\cal D}$. This can be seen by
observing that a refinement of a regular system ${\cal D}$
to which the function $f$ is adapted yields the same value
for the sum defining $n!I_G(f)$ in formula~(\ref{(4.2)}) as
the original one. This follows from the additivity of the
random spectral measure $Z_G$ formulated in its
property~(iv), since this
implies that each term
$f(x_{j_1},\dots,x_{j_n})Z_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})$
in the sum at the right-hand side of formula~(\ref{(4.2)})
corresponding to the original regular system equals the sum
of all such terms $f(x_{j_1},\dots,x_{j_n})
Z_G(\Delta'_{j'_1})\cdots Z_G(\Delta'_{j'_n})$ in the sum
corresponding to the refined partition for which
$\Delta'_{j'_1}\times\cdots\times\Delta'_{j'_n}\subset
\Delta_{j_1}\times\cdots\times\Delta_{j_n}$.
By property~(vii) of the random spectral measures all products
$$
Z_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
$$
with non-zero coefficient in~(\ref{(4.2)}) are products of
independent random variables. We had this property in mind
when requiring the condition that the function $f$ vanishes on
a product $\Delta_{j_1}\times\cdots\times\Delta_{j_n}$ if
$j_l=\pm j_{l'}$ for some $l\neq l'$. This condition is
interpreted in the literature as discarding the hyperplanes
$x_l=x_{l'}$ and $x_l=-x_{l'}$, \ $l,l'=1,2,\dots,n$,
$l\neq l'$, from the domain of integration. (Let us observe
that in this case, unlike in the definition of the original
Wiener--It\^o integrals discussed in Chapter~7, we omitted
not only the hyperplanes $x_l=x_{l'}$ but also the hyperplanes
$x_l=-x_{l'}$, $l\neq l'$, from the domain of integration.)
Property~(a) of the functions in $\bar{{\cal H}}_G^n$
and property~(v) of the random spectral measures imply that
$I_G(f)=\overline{I_G(f)}$, i.e. $I_G(f)$ is a real valued
random variable for all $f\in\hat{\bar{{\cal H}}}_G^n$.
The relation
\begin{equation}
EI_G(f)=0, \quad \textrm{for }f\in\hat{\bar{{\cal H}}}_G^n,
\quad n=1,2,\dots \label{(4.3)}
\end{equation}
also holds. Let
$\hat{{\cal H}}_G^n={\cal H}_G^n\cap\hat{\bar{{\cal H}}}_G^n$.
If $f\in\hat{\bar{{\cal H}}}_G^n$, then
$\textrm{Sym}\, f\in\hat{{\cal H}}_G^n$, and
\begin{equation}
I_G(f)=I_G(\textrm{Sym}\, f). \label{(4.4)}
\end{equation}
Relation~(\ref{(4.4)}) follows immediately from the observation that
$Z_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})=Z_G(\Delta_{\pi(j_1)})\cdots
Z_G(\Delta_{\pi(j_n)})$ for all $\pi\in\Pi_n$.
We also claim that
\begin{equation}
EI_G(f)^2\le\frac1{n!}\|f\|^2 \quad\textrm {for \ }
f\in\hat{\bar{{\cal H}}}_G^n, \label{(4.5)}
\end{equation}
and
\begin{equation}
EI_G(f)^2=\frac1{n!}\|f\|^2 \quad\textrm {for \ } f\in\hat{{\cal H}}_G^n.
\label{($4.5'$)}
\end{equation}
Because of~(\ref{(4.1)}) and~(\ref{(4.4)}) it is enough to
check~(\ref{($4.5'$)}).
Let ${\cal D}$ be a regular system of sets in $R^\nu$,
$j_1,\dots,j_n$ and $k_1,\dots,k_n$ be indices such that
$j_l\neq\pm j_{l'}$, $k_l\neq\pm k_{l'}$ if $l\neq l'$. Then
\begin{eqnarray*}
&&EZ_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
\overline{Z_G(\Delta_{k_1})\cdots Z_G(\Delta_{k_n})}\\
&&\qquad\qquad=\left\{
\begin{array}{l}
G(\Delta_{j_1})\cdots G(\Delta_{j_n}) \quad\textrm{ if \ }
\{j_1,\dots,j_n\}=\{k_1,\dots,k_n\}, \\
0 \quad \textrm{otherwise.}
\end{array} \right.
\end{eqnarray*}
To see the last relation one has to observe that the product on the
left-hand side can be written as a product of independent random
variables because of property~(vii) of the random spectral measures.
If $\{j_1,\dots,j_n\}\neq\{k_1,\dots,k_n\}$, then there is an index~$l$
such that either $j_l\neq\pm k_{l'}$ for all $1\le l'\le n$, or there
exists an index $l'$, $1\le l'\le n$, such that $j_l=-k_{l'}$. In the
first case $Z_G(\Delta_{j_l})$ is independent of the remaining
coordinates of the vector
$(Z_G(\Delta_{j_1}),\dots,Z_G(\Delta_{j_n}),
\overline{Z_G(\Delta_{k_1})},\dots,\overline{Z_G(\Delta_{k_n})})$,
and $EZ_G(\Delta_{j_l})=0$. Hence the expectation of the investigated
product equals zero, as we claimed. If ${j_l}=-k_{l'}$ with some index
$l'$, then a different argument is needed, since $Z_G(\Delta_{j_l})$
and $Z_G(-\Delta_{j_l})$ are not independent. In this case we can
state that since $j_p\neq\pm j_l$ if $p\neq l$, and
$k_q\neq\pm j_l$ if $q\neq l'$, the vector
$(Z_G(\Delta_{j_l}),Z_G(-\Delta_{j_l}))$ is independent of the
remaining coordinates of the above random vector. On the other hand,
the product $Z_G(\Delta_{j_l})\overline{Z_G(-\Delta_{j_l})}$
has zero expectation, since
$EZ_G(\Delta_{j_l})\overline{Z_G(-\Delta_{j_l})}
=G(\Delta_{j_l}\cap(-\Delta_{j_l}))=0$ by property~(iii) of the
random spectral measures and the relation
$\Delta_{j_l}\cap(-\Delta_{j_l})=\emptyset$. Hence the expectation
of the considered product equals zero also in this case. If
$\{j_1,\dots,j_n\}=\{k_1,\dots,k_n\}$, then
$$
EZ_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
\overline{Z_G(\Delta_{k_1})\cdots Z_G(\Delta_{k_n})}
=\prod\limits_{l=1}^n EZ_G(\Delta_{j_l})
\overline{Z_G(\Delta_{j_l})}=\prod\limits_{l=1}^n G(\Delta_{j_l}).
$$
Therefore for a function $f\in\hat{{\cal H}}_G^n$
\begin{eqnarray*}
EI_G(f)^2&&=\left(\frac1{n!}\right)^2\sum\sum f(x_{j_1},\dots,x_{j_n})
\overline{f(x_{k_1},\dots,x_{k_n})} \\
&&\qquad\qquad\qquad EZ_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
\overline{Z_G(\Delta_{k_1})\cdots Z_G(\Delta_{k_n})}\\
&&=\left(\frac1{n!}\right)^2\sum |f(x_{j_1},\dots,x_{j_n})|^2
G(\Delta_{j_1})\cdots G(\Delta_{j_n}) \cdot n! \\
&&=\frac1{n!}\int |f(x_1,\dots,x_n)|^2G(\,dx_1)\cdots G(\,dx_n)
=\frac1{n!}\|f\|^2.
\end{eqnarray*}
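The isometry~(\ref{($4.5'$)}) can also be checked by simulation. The following Monte Carlo sketch does so for $n=1$ with a hypothetical two-atom spectral measure; the fixed seed keeps the run reproducible, and the $5\%$ tolerance is an arbitrary choice:

```python
import numpy as np

# Monte Carlo check of E I_G(f)^2 = ||f||^2 / n!  for n = 1 and an atomic
# spectral measure with atoms Delta_{+/-1}, Delta_{+/-2}
rng = np.random.default_rng(4)
g = np.array([0.5, 1.2])                  # G(Delta_1), G(Delta_2)
fv = np.array([0.8 + 0.3j, -0.2 + 1.1j])  # f on Delta_1, Delta_2; f(-x)=conj f(x)

norm_sq = 2 * np.sum(np.abs(fv)**2 * g)   # ||f||^2 over all four atoms

N = 200_000
Z = (rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))) * np.sqrt(g / 2)
# I_G(f) = sum_j f(x_j) Z_G(Delta_j) = 2 Re sum_{j>0} f(x_j) Z_G(Delta_j),
# using Z_G(Delta_{-j}) = conj(Z_G(Delta_j))
I1 = 2 * np.real(Z @ fv)

assert abs(np.mean(I1**2) - norm_sq) / norm_sq < 0.05
```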
We claim that Wiener--It\^o integrals of different order are
uncorrelated. More explicitly, take two functions
$f\in\hat{\bar{{\cal H}}}^n_G$ and $f'\in\hat{\bar{{\cal H}}}^{n'}_G$
such that $n\neq n'$. Then we have
\begin{equation}
EI_G(f)I_G(f')=0 \quad \textrm{if \ }f\in \hat{\bar{{\cal H}}}^n_G, \;\;
f'\in\hat{\bar{{\cal H}}}^{n'}_G, \textrm{ and \ } n\neq n'.
\label{(4.6)}
\end{equation}
To see this relation observe that a regular system ${\cal D}$ can be
chosen in such a way that both $f$ and $f'$ are adapted to it.
Then a similar but simpler argument than the previous one shows that
$$
EZ_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
\overline{Z_G(\Delta_{k_1})\cdots Z_G(\Delta_{k_{n'}})}=0
$$
for all sets of indices $\{j_1,\dots,j_n\}$ and $\{k_1,\dots,k_{n'}\}$
if $n\neq n'$, hence the sum expressing $EI_G(f)I_G(f')$ in this case
equals zero.
\medskip
We extend the definition of Wiener--It\^o integrals to a
more general class of kernel functions with the help of
the following Lemma~4.1. This is a simple result, but
unfortunately it contains several small technical
details which make its reading unpleasant.
\medskip\noindent
{\bf Lemma 4.1.} {\it The class of simple functions
$\hat{\bar{{\cal H}}}_G^n$ is dense in the (real) Hilbert space
$\bar{{\cal H}}_G^n$, and the class of symmetric simple functions
$\hat{{\cal H}}_G^n$ is dense in the (real) Hilbert space
${\cal H}_G^n$.}
\medskip\noindent
{\it Proof of Lemma 4.1.}\/
It is enough to show that $\hat{\bar{{\cal H}}}_G^n$ is dense
in the Hilbert space $\bar{{\cal H}}_G^n$, since the second
statement of the lemma follows from it by a standard
symmetrization procedure.
First we reduce the result of Lemma~4.1 to a {\it Statement~A}\/
and then to a {\it Statement~B}. Finally we prove {\it Statement~B}.
In {\it Statement~A}\/ we claim that the indicator function
$\chi_A$ of a bounded set $A\in{\cal B}^{n\nu}$ such that
$A=-A$ can be well approximated by a function of the form
$g=\chi_B\in\hat{\bar{{\cal H}}}_G^n$, where $\chi_B$ is the
indicator function of an appropriate set~$B$. Actually we
formulate {\it Statement~A}\/ in a more complicated form, because
only in such a way can we reduce the statement about the
good approximability of a general, possibly complex valued
function~$f\in\bar{{\cal H}}_G^n$ by a function
$g\in\hat{\bar{{\cal H}}}_G^n$ to {\it Statement~A}.
\medskip\noindent
{\it Statement A.}\/ Let $A\in{\cal B}^{n\nu}$ be a bounded,
symmetric set, i.e. let $A=-A$. Then for any $\varepsilon>0$
there is a function $g\in\hat{\bar{{\cal H}}}_G^n$ such that
$g=\chi_B$ with some set $B\in{\cal B}^{n\nu}$, i.e. $g$~is
the indicator function of a set~$B$, such that the inequality
$\|g-\chi_A\|<\varepsilon$ holds with the norm of the
space $\bar{{\cal H}}_G^n$. (Here $\chi_A$ denotes the
indicator function of the set~$A$, and we have
$\chi_A\in\bar{{\cal H}}_G^n$.)
If the set $A$ can be written in the form $A=A_1\cup (-A_1)$
with such a set $A_1$ for which the sets $A_1$ and $-A_1$ have
a positive distance from each other, i.e.
$\rho(A_1,-A_1)=\inf\limits_{x\in A_1,\,y\in -A_1}\rho(x,y)>\delta$,
with some $\delta>0$, where $\rho$ denotes the Euclidean
distance in $R^{n\nu}$, then a good approximation of
$\chi_A$ can be given with such a function
$g=\chi_{B\cup(-B)}\in\hat{\bar{{\cal H}}}_G^n$ for which
the sets $B$ and $-B$ are separated from each other. More
explicitly, for all $\varepsilon>0$ there is a set
$B\in{\cal B}^{n\nu}$ such that
$B\subset A_1^{\delta/2}=\{x\colon\; \rho(x,A_1)\le\frac\delta2\}$,
$g=\chi_{B\cup(-B)}\in\hat{\bar{{\cal H}}}_G^n$, and
$G^n(A_1\,\Delta\,B)<\frac\varepsilon2$. Here $A\Delta B$
denotes the symmetric difference of the sets $A$ and $B$,
and $G^n$ is the $n$-fold direct product of the spectral
measure~$G$ on the space $R^{n\nu}$. (The above
properties of the set~$B$ imply that the function
$g=\chi_{B\cup(-B)}\in\hat{\bar{{\cal H}}}_G^n$ satisfies
the relation $\|g-\chi_A\|<\varepsilon$.)
\medskip
To justify the reduction of Lemma~4.1 to {\it Statement~A}\/
let us observe that if two functions $f_1\in\bar{{\cal H}}_G^n$
and $f_2\in\bar{{\cal H}}_G^n$ can be arbitrarily well
approximated by functions from $\hat{\bar{{\cal H}}}_G^n$
in the norm of this space, then the same relation
holds for any linear combination $c_1f_1+c_2f_2$ with real
coefficients~$c_1$ and~$c_2$. (If the functions $f_i$ are
approximated by some functions $g_i\in\hat{\bar{{\cal H}}}_G^n$,
$i=1,2$, then we may assume, by applying some refinement of the
partitions if it is necessary, that the approximating functions
$g_1$ and $g_2$ are adapted to the same regular partition.)
Hence the proof about the arbitrarily good approximability of a
function $f\in\bar{{\cal H}}_G^n$ by functions
$g\in\hat{\bar{{\cal H}}}_G^n$ can be reduced to the proof
about the arbitrarily good approximability of its real part
$\textrm{Re}\, f\in\bar{{\cal H}}_G^n$ and its imaginary part
$\textrm{Im}\, f\in\bar{{\cal H}}_G^n$. Moreover, since the
real part and imaginary part of the function~$f$ can be
arbitrarily well approximated by such real or imaginary valued
functions from the space $\bar{{\cal H}}_G^n$ which take only
finitely many values, the desired approximation result can be
reduced to the case when $f$ is the indicator function of a
set $A\in{\cal B}^{n\nu}$ such that $A=-A$ (if $f$ is real
valued), or it takes three values, the value $i$ on a set
$A_1\in {\cal B}^{n\nu}$, the value~$-i$ on the set $-A_1$,
and it equals zero on $R^{n\nu}\setminus(A_1\cup(-A_1))$
(if $f$ is purely imaginary valued). Besides, the
inequalities $G^n(A)<\infty$ and $G^n(A_1)<\infty$ hold. We
may even assume that $A$ and $A_1$ are bounded sets, because
$G^n(A)=\lim\limits_{K\to\infty}G^n(A\cap[-K,K]^{n\nu})$, and
the same argument applies for~$A_1$.
{\it Statement~A}\/ immediately implies the desired approximation
result in the first case when $f$ is the indicator function
of a set~$A$ such that $A=-A$. In the second case, when such a
function~$f$ is considered that takes the values $\pm i$ and
zero, observe that the sets $A_1=\{x\colon\; f(x)=i\}$ and
$-A_1=\{x\colon\; f(x)=-i\}$ are disjoint. Moreover, we may assume
that they have positive distance from each other, because there
are such compact sets $K_N\subset A_1$, $N=1,2,\dots$, for which
$\lim\limits_{N\to\infty} G^n(A\setminus (K_N\cup(-K_N)))=0$, and
the two disjoint compact sets $K_N$ and $-K_N$ have positive
distance. This enables us to restrict our attention to the
approximation of such functions $f$ for which
$A_1=\{x\colon\; f(x)=i\}=K_N$ and
$-A_1=\{x\colon\; f(x)=-i\}=-K_N$ with one of the above defined
sets $K_N$ with a sufficiently large index $N$. To get a
good approximation in this case apply the
second part of {\it Statement A}\/ for the indicator
function $\chi_A=\chi_{K_N\cup (-K_N)}$ with the choice
$A_1=K_N$. We get that there exists a function
$g=\chi_{B\cup(-B)}\in\hat{\bar{{\cal H}}}_G^n$ such that
$B\subset A_1^{\delta/2}$ with a number $\delta>0$ for which
the relation $\rho(K_N,-K_N)>\delta$ holds, and
$G^n(A_1\,\Delta\,B)<\frac\varepsilon2$. Then we define with
the help of the above set $B$ the function
$\bar g\in\hat{\bar{{\cal H}}}_G^n$ as
$\bar g(x)=i$ if $x\in B$, $\bar g(x)=-i$ if $x\in-B$ and
$\bar g(x)=0$ otherwise. The definition of the function
$\bar g(\cdot)$ is meaningful, since $B\cap(-B)=\emptyset$,
and it yields a sufficiently good approximation of the
function $f(\cdot)$.
\medskip
In the next step we reduce the proof of {\it Statement~A}\/
to the proof of a result called {\it Statement~B}. We show
that to prove {\it Statement~A}\/ it is enough to prove the
good approximability of some very special (and relatively
simple) indicator functions $\chi_B\in\bar{{\cal H}}_G^n$
by a function $g\in\hat{\bar{{\cal H}}}_G^n$.
\medskip\noindent
{\it Statement B.}\/ Let $B=D_1\times\cdots\times D_n$ be
the direct product of bounded sets $D_j\in{{\cal B}}^\nu$
such that $D_j\cap(-D_j)=\emptyset$ for all $1\le j\le n$.
Then for all $\varepsilon>0$ there is a set
$F\subset B\cup(-B)$, $F\in{\cal B}^{n\nu}$ such that
$\chi_F\in\hat{\bar{{\cal H}}}_G^n$, and
$\|\chi_{B\cup(-B)}-\chi_F\|\le\varepsilon$, with the norm
of the space $\bar{{\cal H}}_G^n$.
\medskip
To deduce {\it Statement~A}\/ from {\it Statement~B}\/ let us
first remark that we may restrict our attention to such sets~$A$
in {\it Statement~A}\/ for which all coordinates of the points
in the set~$A$ are separated from the origin. More explicitly,
we may assume the existence of a number $\eta>0$ with the
property $A\cap K(\eta)=\emptyset$, where
$K(\eta)=\bigcup\limits_{j=1}^n K_j(\eta)$ with
$K_j(\eta)=\{(x_1,\dots,x_n)\colon\; x_l\in R^\nu,\;l=1,\dots,n,\;
\rho(x_j,0)\le\eta\}$. To see our right to make such a reduction
observe that the relation $G(\{0\})=0$ implies that
$\lim\limits_{\eta\to0}G^n(K(\eta))=0$, hence
$\lim\limits_{\eta\to0}G^n(A\setminus K(\eta))=G^n(A)$. At this point
we exploited a weakened form of the non-atomic property of the
spectral measure~$G$, namely the relation~$G(\{0\})=0$.
\medskip
First we formulate a result which we prove somewhat later,
and show that the proof of {\it Statement~A}\/ can be
reduced to that of {\it Statement~B}\/ with its help. We
claim that for all numbers~$\varepsilon>0$, $\bar\delta>0$
and bounded sets $A\in{\cal B}^{n\nu}$ such that $A=-A$, and
$A\cap K(\eta)=\emptyset$ there is a finite sequence of
bounded sets $B_j\in{\cal B}^{n\nu}$, $j=\pm1,\dots,\pm N$,
with the following properties. The sets $B_j$ are disjoint,
$B_{-j}=-B_j$, $j=\pm1,\dots,\pm N$, each set $B_j$ can be
written in the form $B_j=D^{(j)}_1\times\cdots\times D^{(j)}_n$
with $D^{(j)}_k\in{\cal B}^{\nu}$, and
$D^{(j)}_k\cap(-D^{(j)}_k)=\emptyset$ for all
$1\le j\le N$ and $1\le k\le n$, the diameter
$d(B_j)=\sup\{\rho(x,y)\colon\;x,y\in B_j\}$ of the
sets $B_j$ has the bound $d(B_j)\le\bar\delta$ for all
$1\le j\le N$, and finally the set
$B=\bigcup\limits_{j=1}^N(B_j\cup B_{-j})$ satisfies the
relation $G^n(A\Delta B)\le\varepsilon$.
Indeed, since we can choose $\varepsilon>0$ arbitrarily small,
the above result together with the application of
{\it Statement~B}\/ for all functions $\chi_{B_j\cup(-B_j)}$,
$1\le j\le N$, supplies an arbitrarily good approximation of
the function $\chi_A$ by a function of the form
$\sum\limits_{j=1}^N\chi_{F_j}\in\hat{\bar{{\cal H}}}^n_G$
in the norm of the space $\bar{{\cal H}}_G^n$. Moreover, the
function
$\sum\limits_{j=1}^N\chi_{F_j}\in\hat{\bar{{\cal H}}}^n_G$
agrees with the indicator function of the set
$\bigcup\limits_{j=1}^N F_j$, since the sets $B_j$,
$j=\pm1,\dots,\pm N$, are disjoint, and
$F_j\subset B_j\cup B_{-j}$.
If the set $A$ can be written in the form $A=A_1\cup(-A_1)$
such that $\rho(A_1,-A_1)>\delta$, then we can make the
same construction with the only modification that this
time we demand that the sets $B_j$ satisfy the relation
$d(B_j)\le\bar\delta$ with some $\bar\delta<\frac\delta2$
for all $1\le j\le N$. We may assume that
$A\cap(B_j\cup B_{-j})\neq\emptyset$ for all indices~$j$,
since we can omit those sets $B_j\cup B_{-j}$ which do not
have this property. Since $d(B_j)<\frac\delta2$, a set
$B_j$ cannot intersect both $A_1$ and $-A_1$. By an
appropriate indexation of the sets $B_j$ we have
$B_j\subset A_1^{\delta/2}$ and $B_{-j}\subset(-A_1)^{\delta/2}$
for all $1\le j\le N$. Then the set
$B=\bigcup\limits_{j=1}^N (B_j\cap F_j)$ and the function
$g=\chi_{B\cup(-B)}$ satisfy the second part of
{\it Statement A}.
To find a sequence~$B_j$, $j=\pm1,\dots,\pm N$, for a set~$A$
such that $A=-A$, and $A\cap K(\eta)=\emptyset$ with the
properties needed in the above argument observe that there
is a sequence of finitely many bounded sets $B_j$ of the
form $B_j=D^{(j)}_1\times\cdots\times D^{(j)}_n$,
$D^{(j)}_l\in{\cal B}^\nu$, whose union $B=\bigcup B_j$
satisfies the relation $G^n(A\,\Delta\,B)<\frac\varepsilon2$.
Because of the symmetry property $A=-A$ of the set~$A$ we may
assume that these sets $B_j$ have such an indexation with both
positive and negative integers for which $B_j=-B_{-j}$. We may
also demand that $B_j\cap A\neq\emptyset$ for all sets~$B_j$.
Besides, we may assume, by dividing the sets $D^{(j)}_l$
appearing in the definition of the sets~$B_j$ into smaller sets
if this is needed, that their diameter satisfies
$d(D^{(j)}_l)<\min(\frac\eta2,\frac{\bar\delta}n)$.
This implies because of the relation $A\cap K(\eta)=\emptyset$
that $D^{(j)}_l\cap(-D^{(j)}_l)=\emptyset$ for all $j$ and
$1\le l\le n$. The sets $B_j$ constructed above may not be
disjoint, but by splitting them further in an appropriate way
and by properly indexing the sets obtained in this manner we
get a partition of the set $B$ which satisfies all the
conditions we demanded. For the sake of
completeness we present a partition of the set $B$ with
the properties we need.
Let us first take the following partition of $R^\nu$ for
all $1\le l\le n$ with the help of the sets $D^{(j)}_l$,
$1\le j\le N$. For a fixed number~$l$ this partition
consists of all sets $\bar D^{(l)}_r$ of the form
$\bar D^{(l)}_r=\bigcap\limits_{1\le j\le N} F^{(r(j))}_{l,j}$,
where the indices $r$ are sequences $(r(1),\dots,r(N))$
of length~$N$ with $r(j)=1,2$ or~3, $1\le j\le N$, and
$F^{(1)}_{l,j}=D^{(j)}_l$, $F^{(2)}_{l,j}=-D^{(j)}_l$,
$F^{(3)}_{l,j}=R^\nu\setminus(D^{(j)}_l\cup(-D^{(j)}_l))$. Then
$B$ can be represented as the union of those sets of the
form $\bar D^{(1)}_{r_1}\times\cdots\times\bar D^{(n)}_{r_n}$
which are contained in~$B$.
\medskip\noindent
{\it Proof of Statement~B.}\/ To prove this result we show
that for all $\bar\varepsilon>0$ there is a regular system
${\cal D}=\{\Delta_l,\,l=\pm1,\dots,\pm N\}$ such that all
sets $D_j$ and $-D_j$, $1\le j\le n$, appearing in the
formulation of {\it Statement~B}\/ can be expressed as
the union of some elements $\Delta_l$ of ${\cal D}$, and
$G(\Delta_l)\le\bar\varepsilon$ for all $\Delta_l\in{\cal D}$.
First we prove a weakened version of this statement. We show
that there is a regular system
$\bar{{\cal D}}=\{\Delta'_l,\, l=\pm1,\dots,\pm N'\}$ such
that all sets $D_j$ and $-D_j$ can be expressed as the union
of some sets $\Delta'_l$ of $\bar{{\cal D}}$. But we have no
control over the measure~$G(\Delta'_l)$ of the elements of this
regular system. To get such a regular system we define the sets
$\Delta'(\varepsilon_s,\,1\le |s|\le n)
=D^{\varepsilon_1}_1\cap(-D_1)^{\varepsilon_{-1}}
\cap\cdots\cap D^{\varepsilon_n}_n\cap(-D_n)^{\varepsilon_{-n}}$
for all vectors $(\varepsilon_s,\,1\le|s|\le n)$ such that
$\varepsilon_s=\pm1$ for all $1\le |s|\le n$, and the vector
$(\varepsilon_s,\,1\le |s|\le n)$ contains at least one
coordinate~$+1$, where $D^1=D$ and $D^{-1}=R^\nu\setminus D$ for
all sets $D\in{\cal B}^\nu$. Then taking an appropriate
reindexation of the sets
$\Delta'(\varepsilon_s,\,1\le|s|\le n)$ we get a regular
system $\bar{{\cal D}}$ with the desired properties. (In this
construction the sets $\Delta'(\varepsilon_s,\,1\le|s|\le n)$
are disjoint, and during their reindexation we drop those
of them which equal the empty set.) To see that
$\bar{{\cal D}}$ with a good indexation is a
regular system observe that for a set
$\Delta'_l=\Delta'(\varepsilon_s,\,1\le|s|\le n)\in\bar{{\cal D}}$
we have
$-\Delta'_l=\Delta'(\varepsilon_{-s},\,1\le|s|\le n)\in\bar{{\cal D}}$,
and $\Delta'_l\cap(-\Delta'_l)\subset D_j\cap(-D_j)=\emptyset$
with some index $1\le j\le n$. (We had to exclude the
possibility $\Delta'_l=-\Delta'_l$.)
Next we show that by appropriately refining the above regular
system $\bar{{\cal D}}$ we can get a regular system
${\cal D}=\{\Delta_l,\,l=\pm1,\dots,\pm N\}$ which also satisfies
the property $G(\Delta_l)\le\bar\varepsilon$ for all $\Delta_l\in{\cal D}$.
To show this let us observe that there is a finite partition
$\{E_1,\dots,E_l\}$ of $\bigcup\limits_{j=1}^n(D_j\cup(-D_j))$ such that
$G(E_j)\le\bar\varepsilon$ for all $1\le j \le l$. Indeed, the closure
of $D=\bigcup\limits_{j=1}^n(D_j\cup(- D_j))$ can be covered by open
sets $H_i\subset R^\nu$ such that $G(H_i)\le\bar\varepsilon$ for all sets
$H_i$ because of the non-atomic property of the measure~$G$, and
by the Heine--Borel theorem this covering can be chosen finite.
With the help of these sets $H_i$ we can get a partition
$\{E_1,\dots,E_l\}$ of $\bigcup\limits_{j=1}^n(D_j\cup(-D_j))$
with the desired properties.
Then we can make the following construction with the help of the
above sets~$E_j$ and $\Delta'_l$. Take a pair of elements
$(\Delta'_l,\Delta'_{-l})=(\Delta'_l,-\Delta'_l)$
of $\bar{{\cal D}}$, and split up the
set $\Delta'_l$ with the help of the sets~$E_j$
into the union of finitely many disjoint sets of the form
$\Delta_{l,j}=\Delta'_l\cap E_j$. Then
$G(\Delta_{l,j})<\bar\varepsilon$ for all sets $\Delta_{l,j}$, and
we can write the set $\Delta'_{-l}$ as the union of the
disjoint sets $-\Delta_{l,j}$. By applying this procedure for all
pairs $(\Delta'_l,\Delta'_{-l})$ and by reindexing the sets
$\Delta_{l,j}$ obtained by this procedure in an appropriate way we
get a regular system ${\cal D}$ with the desired properties.
Let us write $B\cup(-B)$ as the union of products of sets of
the form $\Delta_{l_1}\times\cdots\times\Delta_{l_n}$ with sets
$\Delta_{l_j}\in{\cal D}$, $1\le j\le n$, and let us discard those
products for which $l_j=\pm l_{j'}$ for some pair $(j,\,j')$,
$j\neq j'$. We define the set $F$, about which we claim that it
satisfies {\it Statement~B}, as the union of the remaining sets
$\Delta_{l_1}\times\cdots\times\Delta_{l_n}$. Then
$\chi_F\in\hat{\bar{{\cal H}}}_G^n$. Hence to prove that
{\it Statement~B}\/ holds with this set~$F$ if
$\bar\varepsilon>0$ is chosen sufficiently small it is enough
to show that the sum of the terms
$G(\Delta_{l_1})\cdots G(\Delta_{l_n})$ for which $l_j=\pm l_{j'}$
with some $j\neq j'$ is less than $n^2\bar\varepsilon M^{n-1}$, where
$M=\max\limits_{1\le j\le n} G(D_j\cup(-D_j))=2\max\limits_{1\le j\le n} G(D_j)$. To see this
observe that for a fixed pair $(j,j')$, $j\neq j'$, the sum
of all products $G(\Delta_{l_1})\cdots G(\Delta_{l_n})$ such that
$l_j=l_{j'}$ can be bounded by $\bar\varepsilon M^{n-1}$, and the same
estimate holds if summation is taken for products with the
property $l_j=-l_{j'}$. Indeed, each term of this sum can be
bounded by
$\bar\varepsilon G^{n-1}
\left(\prod\limits_{1\le p\le n,\,p\neq j}\Delta_{l_p}\right)$, and
the sets whose $G^{n-1}$ measure is considered in the
investigated sum are disjoint. Besides, their union is contained
in the product set
$\prod\limits_{1\le p\le n,\,p\neq j}(D_p\cup(-D_p))$,
whose measure is bounded by $M^{n-1}$. Lemma 4.1 is proved.
\hfill$\qed$
\medskip
As the transformation $I_G(f)$ is a contraction from
$\hat{\bar{{\cal H}}}_G^n$ into $L_2(\Omega,{\cal A},P)$, it can uniquely
be extended to the closure of $\hat{\bar{{\cal H}}}_G^n$, i.e.\ to
$\bar{{\cal H}}_G^n$. We define the $n$-fold Wiener--It\^o integral
in the general case via this extension.
\index{Wiener--It\^o integral}
The expression $I_G(f)$ is a real valued random variable for
all $f\in\bar{{\cal H}}_G^n$, and relations~(\ref{(4.3)}),
(\ref{(4.4)}), (\ref{(4.5)}), (\ref{($4.5'$)}) and~(\ref{(4.6)})
remain valid for $f,\,f'\in\bar{{\cal H}}_G^n$ or
$f\in{\cal H}_G^n$ instead of
$f,\,f'\in\hat{\bar{{\cal H}}}_G^n$ or $f\in\hat{{\cal H}}_G^n$.
Relations~(\ref{($4.5'$)}) and~(\ref{(4.6)}) imply that the
transformation
$I_G\colon\; \textrm{Exp}\,{{\cal H}}_G\to L^2(\Omega,{\cal A},P)$
is an isometry. We shall show that also the following result holds.
\medskip\noindent
{\bf Theorem 4.2.} {\it Let a stationary Gaussian random field
be given (discrete or generalized one), and let $Z_G$ denote the
random spectral measure adapted to it. If we integrate with respect
to this $Z_G$, then the transformation
$I_G\colon\; \textrm{\rm Exp}\,{{\cal H}}_G\to {\cal H}$ is
unitary. The transformation
$(n!)^{1/2}I_G\colon\; {\cal H}_G^n\to{\cal H}_n$ is also unitary.}
\medskip
In the proof of Theorem~4.2 we need an identity whose proof is
postponed to the next chapter.
\medskip\noindent
{\bf Theorem 4.3. (It\^o's formula.)}
\index{It\^o's formula}
{\it Let $\varphi_1,\dots,\varphi_m$, \
$\varphi_j\in{\cal H}_G^1$, $1\le j\le m$, be an orthonormal
system in $L_G^2$. Let some positive integers $j_1,\dots,j_m$
be given, and let $j_1+\cdots+j_m=N$. Define for all
$i=1,\dots,N$ the function $g_i$ as $g_i=\varphi_s$ for
$j_1+\cdots+j_{s-1}<i\le j_1+\cdots+j_s$, \ $1\le s\le m$, with the
convention $j_0=0$. Then
$$
H_{j_1}\left(\int\varphi_1(x)Z_G(\,dx)\right)\cdots
H_{j_m}\left(\int\varphi_m(x)Z_G(\,dx)\right)
=\int g_1(x_1)\cdots g_N(x_N)Z_G(\,dx_1)\dots Z_G(\,dx_N).
$$
}
\medskip\noindent
{\bf Lemma 4.6.} {\it Let us define for all $t>0$ the (multiplication)
transformation $T_tx=tx$ either from $R^\nu$ to $R^\nu$ or from the
torus $[-\pi,\pi)^\nu$ to the torus $[-t\pi,t\pi)^\nu$. Given a
spectral measure $G$ on $R^\nu$ or on $[-\pi,\pi)^\nu$ define the
spectral measure $G_t$ on $R^\nu$ or on $[-t\pi,t\pi)^\nu$ by the
formula $G_t(A)=G(\frac At)$ for all measurable sets~$A$, and
similarly define the function
$f_{k,t}(x_1,\dots,x_k)=f_k(\frac{x_1}t,\dots,\frac{x_k}t)$ for all measurable
functions~$f_k$ of $k$ variables, $k=1,2,\dots$, with
$x_j\in R^\nu$ or $x_j\in [-\pi,\pi)^\nu$ for all $1\le j\le k$,
and put $f_{0,t}=f_0$. If
$f=(f_0,f_1,\dots)\in\textrm{\rm Exp}\,{\cal H}_G$, then
$f_t=(f_{0,t},f_{1,t},\dots)\in\textrm{\rm Exp}\,{\cal H}_{G_t}$, and
\begin{eqnarray*}
&&f_0+\sum_{n=1}^\infty\frac1{n!}\int f_n(x_1,\dots,x_n)
Z_G(\,dx_1)\dots Z_G(\,dx_n) \\
&&\qquad \stackrel{\Delta}{=}f_{0,t}+\sum_{n=1}^\infty\frac1{n!}
\int f_{n,t}(x_1,\dots,x_n)Z_{G_t}(\,dx_1) \dots Z_{G_t}(\,dx_n),
\end{eqnarray*}
where $Z_G$ and $Z_{G_t}$ are Gaussian random spectral measures
corresponding to $G$ and~$G_t$.}
\medskip\noindent
{\it Proof of Lemma~4.6.}\/ It is easy to see that
$f_t=(f_{0,t},f_{1,t},\dots)\in\textrm{\rm Exp}\,{\cal H}_{G_t}$.
Moreover, we may define the random spectral measure $Z_{G_t}$ in the
identity we want to prove by the formula $Z_{G_t}(A)=Z_G(\frac At)$.
But with such a choice of $Z_{G_t}$ we can write even $=$ instead of
$\stackrel{\Delta}{=}$ in this formula. \hfill$\qed$
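\medskip\noindent
{\it Remark.}\/ The choice $Z_{G_t}(A)=Z_G(\frac At)$ made in this
proof can be checked at the level of second moments:
$$
E|Z_{G_t}(A)|^2=E\left|Z_G\left(\frac At\right)\right|^2
=G\left(\frac At\right)=G_t(A),
$$
and the remaining defining properties of a Gaussian random spectral
measure corresponding to the spectral measure~$G_t$ can be verified
in the same way.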
\medskip
The next result shows a relation between Wick polynomials and
Wiener--It\^o integrals.
\medskip\noindent
{\bf Theorem 4.7.} {\it Let a stationary Gaussian field be given, and
let $Z_G$ denote the random spectral measure adapted to it. Let
$P(x_1,\dots,x_m)=\sum c_{j_1,\dots,j_n}x_{j_1}\cdots x_{j_n}$
be a homogeneous polynomial of degree~$n$, and let
$h_1,\dots,h_m\in{\cal H}_G^1$. (Here $j_1,\dots,j_n$ are $n$ indices
such that $1\le j_l\le m$ for all $1\le l\le n$. It is possible that
$j_l=j_{l'}$ also if $l\neq l'$.) Define the random variables
$\xi_j=\int h_j(x)Z_G(\,dx)$, $j=1,2,\dots,m$, and the function
$\tilde P(u_1,\dots,u_n)=\sum c_{j_1,\dots,j_n} h_{j_1}(u_1)\cdots
h_{j_n}(u_n)$. Then
$$
\colon\!P(\xi_1,\dots,\xi_m)\!\colon=\int \tilde P(u_1,\dots,u_n)
Z_G(\,du_1)\dots Z_G(\,du_n).
$$
}
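\medskip\noindent
{\it Example.}\/ The simplest instance of Theorem~4.7 may be
instructive. Let $m=1$, $n=2$, $P(x_1)=x_1^2$, and let
$h_1\in{\cal H}_G^1$ have norm~1. Then
$\tilde P(u_1,u_2)=h_1(u_1)h_1(u_2)$, and Theorem~4.7 states that
$$
\colon\!\xi_1^2\!\colon=\int h_1(u_1)h_1(u_2)Z_G(\,du_1)Z_G(\,du_2),
$$
where the right-hand side equals $H_2(\xi_1)=\xi_1^2-1$ by It\^o's
formula. This agrees with the definition of Wick polynomials, since
$E\xi_1^2=\|h_1\|^2=1$.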
\medskip\noindent
{\it Remark.} If $P$ is a polynomial of degree $n$, then it can be
written as $P=P_1+P_2$, where $P_1$ is a homogeneous polynomial of
degree~$n$, and $P_2$ is a polynomial of degree less than~$n$.
Obviously,
$$
\colon\!P(\xi_1,\dots,\xi_m)\!\colon=\colon\!P_1(\xi_1,\dots,\xi_m)\!\colon.
$$
\medskip\noindent
{\it Proof of Theorem 4.7.}\/ It is enough to show that
$$
\colon\!\xi_{j_1}\cdots\xi_{j_n}\!\colon=\int h_{j_1}(u_1)\cdots h_{j_n}(u_n)
Z_G(\,du_1)\dots Z_G(\,du_n).
$$
If $h_1,\dots,h_m\in{\cal H}_G^1$ are orthonormal (all functions $h_l$
have norm~1, and if $l\neq l'$, then $h_l$ and $h_{l'}$ are either
orthogonal or $h_l=h_{l'}$), then this relation follows from a comparison
of Corollary~2.3 with It\^o's formula. In the general case an orthonormal
system $\bar h_1,\dots,\bar h_m$ can be found such that
$$
h_j=\sum_{k=1}^m c_{j,k}\bar h_k,\quad j=1,\dots,m
$$
with some real constants $c_{j,k}$. Set $\eta_k=\int \bar h_k(x)Z_G(\,dx)$.
Then
\begin{eqnarray*}
\colon\!\xi_{j_1}\cdots\xi_{j_n}\!\colon
&=&\colon\! \left(\sum_{k=1}^m c_{j_1,k}\eta_k\right)
\cdots\left(\sum_{k=1}^m c_{j_n,k}\eta_k\right)\!\colon \\
&=&\sum_{k_1,\dots,k_n} c_{j_1,k_1}\cdots c_{j_n,k_n}
\colon\!\eta_{k_1}\cdots\eta_{k_n}\!\colon \\
&=&\sum_{k_1,\dots,k_n} c_{j_1,k_1}\cdots c_{j_n,k_n}
\int \bar h_{k_1}(u_1)\cdots\bar h_{k_n}(u_n)Z_G(\,du_1)\dots Z_G(\,du_n)\\
&=&
\int h_{j_1}(u_1)\cdots h_{j_n}(u_n)Z_G(\,du_1)\dots Z_G(\,du_n)
\end{eqnarray*}
as we claimed. \hfill$\qed$
\medskip
We finish this chapter by showing how the Wiener--It\^o integral can
be defined if the spectral measure~$G$ may have atoms. We do this
although such a construction seems to be of limited importance, as
in most applications the restriction that we apply the Wiener--It\^o
integral only in the case of a non-atomic spectral measure~$G$
causes no serious problem. If we try to give this definition by
modifying the original one, then we have to split up the atoms. The
simplest way we found for this splitting up was the use
of randomization.
Let $G$ be a spectral measure on $R^\nu$, and let $Z_G$ be a
corresponding Gaussian random spectral measure on a probability
space $(\Omega,{\cal A},P)$. Let us define a new spectral measure
$\hat G=G\times\lambda_{[-\frac12,\frac12]}$ on $R^{\nu+1}$, where
$\lambda_{[-\frac12,\frac12]}$ denotes the uniform distribution on the
interval $[-\frac12,\frac12]$. If the probability space
$(\Omega,{\cal A},P)$ is sufficiently rich, a random spectral measure
$Z_{\hat G}$ corresponding to $\hat G$ can be defined on it in such a
way that $Z_{\hat G}(A\times [-\frac12,\frac12])=Z_G(A)$ for
all $A\in{\cal B}^\nu$. For $f\in \bar{{\cal H}}_G^n$ we define the function
$\hat f\in\bar{{\cal H}}_{\hat G}^n$ by the formula
$\hat f(y_1,\dots,y_n)=f(x_1,\dots,x_n)$ if $y_j$ is the juxtaposition
$(x_j,u_j)$, \ $x_j\in R^\nu$, $u_j\in R^1$, $j=1,2,\dots,n$. Finally
we define the Wiener--It\^o integral in the general case by the
formula
$$
\int f(x_1,\dots,x_n)Z_G(\,dx_1)\dots Z_G(\,dx_n)
=\int \hat f(y_1,\dots,y_n)Z_{\hat G}(\,dy_1)\dots Z_{\hat G}(\,dy_n).
$$
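\medskip\noindent
{\it Example.}\/ The consistency of this definition with the original
one can be seen in the simplest one-fold case as follows. If
$f=\chi_A$ with a bounded set $A\in{\cal B}^\nu$, $A=-A$, then
$\hat f=\chi_{A\times[-\frac12,\frac12]}$, and
$$
\int\hat f(y)Z_{\hat G}(\,dy)
=Z_{\hat G}(A\times[-\tfrac12,\tfrac12])
=Z_G(A)=\int f(x)Z_G(\,dx)
$$
by the defining property of the random spectral
measure~$Z_{\hat G}$.
\medskip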
(What we have actually done is to introduce a virtual new
coordinate~$u$. With the help of this new coordinate we could reduce
the general case to the special case when $G$ is non-atomic.) If $G$
is a non-atomic spectral measure, then the new definition of
Wiener--It\^o integrals coincides with the original one. It is easy to
check this fact for one-fold integrals, and then It\^o's formula
proves it for multiple integrals. It can be seen with the help of
It\^o's formula again, that all results of this chapter remain valid
for the new definition of Wiener--It\^o integrals. In particular, we
formulate the following result.
Given a stationary Gaussian field let $Z_G$ be the random spectral
measure adapted to it. All $f\in{\cal H}_G^n$ can be written in the
form
\begin{equation}
f(x_1,\dots,x_n)=\sum
c_{j_1,\dots,j_n}\varphi_{j_1}(x_1)\cdots\varphi_{j_n}(x_n) \label{(4.8)}
\end{equation}
with some functions $\varphi_j\in{\cal H}_G^1$, $j=1,2,\dots$. Define
$\xi_j=\int\varphi_j(x)Z_G(\,dx)$. If $f$ has the form~(\ref{(4.8)}), then
$$
\int f(x_1,\dots,x_n)Z_G(\,dx_1)\dots Z_G(\,dx_n)=
\sum c_{j_1,\dots,j_n}\colon\!\xi_{j_1}\cdots\xi_{j_n}\!\colon.
$$
The last identity would provide another possibility for defining
Wiener--It\^o integrals.
\chapter{The proof of It\^o's formula. The diagram formula
and some of its consequences}
We shall prove It\^o's formula with the help of the following
\medskip\noindent
{\bf Proposition 5.1.} {\it Let $f\in\bar{{\cal H}}_G^n$ and
$h\in\bar{{\cal H}}_G^1$. Let us define the functions
$$
f\underset{k}{\times} h(x_1,\dots,x_{k-1},x_{k+1},\dots,x_n)
=\int f(x_1,\dots,x_n)\overline{h(x_k)} G(\,dx_k), \quad k=1,\dots,n,
$$
and
$$
fh(x_1,\dots,x_{n+1})=f(x_1,\dots,x_n)h(x_{n+1}).
$$
Then $f\underset{k}{\times} h$, $k=1,\dots,n$, and $fh$ are in
$\bar{{\cal H}}_G^{n-1}$ and $\bar{{\cal H}}_G^{n+1}$
respectively, and
their norm satisfies the inequality
$\|f\underset{ k}{\times} h\|\le\|f\|\cdot\|h\|$ and
$\|fh\|\le\|f\|\cdot\|h\|$. The relation
$$
n!I_G(f)I_G(h)=(n+1)!I_G(fh)+\sum_{k=1}^n (n-1)!
I_G(f\underset{k}{\times}h)
$$
holds true.}
\medskip
We shall get Proposition 5.1 as the special case of the
diagram formula formulated in Theorem~5.3.
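\medskip\noindent
{\it Example.}\/ In the special case $n=1$ Proposition~5.1 expresses
the identity
$$
\int f(x)Z_G(\,dx)\int h(y)Z_G(\,dy)
=\int f(x)h(y)Z_G(\,dx)Z_G(\,dy)+\int f(x)\overline{h(x)}\,G(\,dx),
$$
since the one-fold and two-fold Wiener--It\^o integrals equal
$1!\,I_G(\cdot)$ and $2!\,I_G(\cdot)$ respectively, and
$f\underset{1}{\times}h$ is a constant whose zero-fold integral is
this constant itself. In particular, if $f=h$ and $\|h\|=1$, then
together with It\^o's formula this identity yields
$\xi^2=\colon\!\xi^2\!\colon+1$ with $\xi=\int h(x)Z_G(\,dx)$.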
\medskip\noindent
{\it Remark.}\/ There is a small inaccuracy in the formulation
of Proposition~5.1. We considered the Wiener--It\^o integral of the
function $f\underset{k}{\times} h$ with arguments
$x_1$,\dots, $ x_{k-1}$, $x_{k+1}$,\dots, $x_n$, while we
defined this integral for functions with arguments
$x_1,\dots,x_{n-1}$. We can correct this inaccuracy for instance
by reindexing the variables of $f\underset{k}{\times} h$
and working with the function
$$
(f\underset{k}{\times} h)'(x_1,\dots,x_{n-1})=
f\underset{k}{\times} h(x_{\alpha_k(1)},\dots,x_{\alpha_k(k-1)},
x_{\alpha_k(k+1)},\dots,x_{\alpha_k(n)})
$$
instead of $f\underset{k}{\times} h$, where $\alpha_k(j)=j$
for $1\le j\le k-1$, and $\alpha_k(j)=j-1$ for $k+1\le j\le n$.
\medskip
We also need the following recursion formula for Hermite polynomials.
\medskip\noindent
{\bf Lemma 5.2.} {\it The identity
$$
H_n(x)=xH_{n-1}(x)-(n-1)H_{n-2}(x) \quad\textrm{for \ }n=1,2,\dots,
$$
holds with the notation $H_{-1}(x)\equiv0$.}
\medskip\noindent
{\it Proof of Lemma 5.2.}
\begin{eqnarray*}
H_n(x)=(-1)^ne^{x^2/2}\frac{d^n}{dx^n}(e^{-x^2/2})
&=&-e^{x^2/2}\frac d{dx}\left(H_{n-1}(x)e^{-x^2/2}\right)\\
&=&xH_{n-1}(x)-\frac d{dx}H_{n-1}(x).
\end{eqnarray*}
Since $\frac d{dx}H_{n-1}(x)$ is a polynomial of order $n-2$
with leading coefficient $n-1$ we can write
$$
\frac d{dx} H_{n-1}(x)=(n-1)H_{n-2}(x)+\sum_{j=0}^{n-3}c_jH_j(x).
$$
To complete the proof of Lemma~5.2 it remains to show that
in the last expansion all coefficients~$c_j$ are zero. This
follows from the orthogonality of the Hermite polynomials and
the calculation
\begin{eqnarray*}
\int e^{-x^2/2}H_j(x)\frac d{dx}H_{n-1}(x)\,dx
&=&-\int H_{n-1}(x)\frac d{dx}(e^{-x^2/2}H_j(x))\,dx\\
&=&\int e^{-x^2/2}H_{n-1}(x)P_{j+1}(x)\,dx=0
\end{eqnarray*}
with the polynomial $P_{j+1}(x)=xH_j(x)-\frac d{dx}H_j(x)$ of
order~$j+1$ for $j\le n-3$. \hfill$\qed$
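\medskip\noindent
{\it Example.}\/ For the first few Hermite polynomials $H_0(x)=1$,
$H_1(x)=x$, $H_2(x)=x^2-1$ and $H_3(x)=x^3-3x$ the recursion of
Lemma~5.2 can be checked directly:
$$
xH_2(x)-2H_1(x)=x^3-x-2x=x^3-3x=H_3(x).
$$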
\medskip\noindent
{\it Proof of Theorem 4.3 via Proposition 5.1.}\/ We prove Theorem~4.3
by induction. Theorem~4.3 holds for $N=1$. Assume that it holds
for~$N-1$. Let us define the functions
\begin{eqnarray*}
f(x_1,\dots,x_{N-1})&=&g_1(x_1)\cdots g_{N-1}(x_{N-1})\\
h(x)&=&g_N(x).
\end{eqnarray*}
Then
\begin{eqnarray*}
J&=&\int g_1(x_1)\cdots g_N(x_N)Z_G(\,dx_1)\dots Z_G(\,dx_N)\\
&=&N!\,I_G(fh)=(N-1)!\,I_G(f)I_G(h)-\sum_{k=1}^{N-1} (N-2)!\,
I_G(f\underset{k}{\times} h)
\end{eqnarray*}
by Proposition~5.1. We can show with the help of our induction
hypothesis that
\begin{eqnarray*}
J&&=H_{j_1}\left(\int\varphi_1(x)Z_G(\,dx)\right)\cdots
H_{j_{m-1}}\left(\int\varphi_{m-1}(x)Z_G(\,dx)\right) \\
&&\qquad\qquad\qquad H_{j_m-1}\left(\int\varphi_{m}(x)Z_G(\,dx)\right)
\int\varphi_m(x)Z_G(\,dx)\\
&&\qquad-(j_m-1)H_{j_1}\left(\int\varphi_1(x)Z_G(\,dx)\right)\cdots
H_{j_{m-1}}\left(\int\varphi_{m-1}(x)Z_G(\,dx)\right)\\
&&\qquad\qquad\qquad H_{j_m-2}\left(\int\varphi_m(x)Z_G(\,dx)\right),
\end{eqnarray*}
where $H_{j_m-2}(x)=H_{-1}(x)\equiv0$ if $j_m=1$. This relation
holds, since
\begin{eqnarray*}
&&f\underset{k}{\times} h(x_1,\dots,x_{k-1},x_{k+1},\dots,x_{N-1})
=\int g_1(x_1)\cdots g_{N-1}(x_{N-1})
\overline{\varphi_m(x_k)}G(\,dx_k)\\
&&\quad \,\,=\left\{
\begin{array}{l}
0 \quad \textrm{if \ } k\le N-j_m\\
g_1(x_1)\cdots g_{k-1}(x_{k-1})g_{k+1}(x_{k+1})\cdots
g_{N-1}(x_{N-1}) \quad \textrm{if \ } N-j_m<k\le N-1
\end{array}\right.
\end{eqnarray*}
because of the orthonormality of the functions
$\varphi_1,\dots,\varphi_m$. The number of indices~$k$ with
$N-j_m<k\le N-1$ equals $j_m-1$, which yields the coefficient of
the second term in the above expression for~$J$. Then an
application of Lemma~5.2 with the argument
$\int\varphi_m(x)Z_G(\,dx)$ shows that
$$
J=H_{j_1}\left(\int\varphi_1(x)Z_G(\,dx)\right)\cdots
H_{j_m}\left(\int\varphi_m(x)Z_G(\,dx)\right),
$$
as Theorem~4.3 states. \hfill$\qed$
\medskip
In the proof of the diagram formula it is enough to consider the
product of two Wiener--It\^o integrals, since the case $p>2$
follows by induction.
We shall use the notation $n_1=n$, $n_2=m$, and we write
$x_1,\dots,x_{n+m}$ instead of
$x_{(1,1)},\dots,x_{(n,1)},x_{(1,2)}\dots,x_{(m,2)}$. It is clear
that the function $h_\gamma$ satisfies Property~(a) of the classes
$\bar{{\cal H}}_G^j$ defined in Chapter~4. We show that Part~(A)
of Theorem~5.3 is a consequence of the Schwarz inequality.
To prove this estimate on the norm of $h_\gamma$ it is enough
to restrict ourselves to such diagrams~$\gamma$ in which the vertices
$(n,1)$ and $(m,2)$, \ $(n-1,1)$ and $(m-1,2)$,\dots, $(n-k,1)$ and
$(m-k,2)$ are connected by edges with some $0\le k\le \min(n,m)$.
In this case we can write
\begin{eqnarray*}
&&|h_\gamma(x_1,\dots,x_{n-k-1},x_{n+1},\dots,x_{n+m-k-1})|^2 \\
&&\qquad=\biggl|\int h_1(x_1,\dots,x_n)h_2(x_{n+1},
\dots,x_{n+m-k-1},-x_{n-k},
\dots,-x_n)\\
&&\qquad\qquad\qquad G(\,dx_{n-k})\dots G(\,dx_n)\biggr|^2 \\
&&\qquad\le \int |h_1(x_1,\dots,x_n)|^2 G(\,dx_{n-k})\dots G(\,dx_n) \\
&&\qquad\qquad\qquad
\int |h_2(x_{n+1},\dots,x_{n+m})|^2 G(\,dx_{n+m-k})\dots G(\,dx_{n+m})
\end{eqnarray*}
by the Schwarz inequality and the symmetry $G(-A)=G(A)$ of the
spectral measure~$G$. Integrating this inequality with respect to
the free variables we get Part~(A) of Theorem~5.3.
In the proof of Part~(B) first we restrict ourselves to
the case when $h_1\in\hat{\bar{{\cal H}}}_G^n$ and
$h_2\in\hat{\bar{{\cal H}}}_G^m$. Assume that they are
adapted to a regular system
${\cal D}=\{\Delta_j,\;j=\pm1,\dots,\pm N\}$ of subsets
of $R^\nu$ with finite measure~$G$. We may even assume
that all $\Delta_j\in{\cal D}$ satisfy the inequality
$G(\Delta_j)<\varepsilon$ with some $\varepsilon>0$ to be
chosen later, because otherwise we could split up the sets
$\Delta_j$ into smaller ones. Let us fix a point
$u_j\in\Delta_j$ in all sets $\Delta_j\in{\cal D}$. Put
$K_i=\sup\limits_x|h_i(x)|$, $i=1,2$, and let $A$ be a
cube containing all~$\Delta_j$.
We can write
\begin{eqnarray*}
I=n!I_G(h_1)m!I_G(h_2)&&={\sum}' h_1(u_{j_1},\dots,u_{j_n})
h_2(u_{k_1},\dots,u_{k_m})\\
&&\qquad\qquad Z_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
Z_G(\Delta_{k_1})\cdots Z_G(\Delta_{k_m})
\end{eqnarray*}
with the points $u_{j_p}\in\Delta_{j_p}$ and
$u_{k_r}\in\Delta_{k_r}$ that we have fixed, where the summation in
$\sum'$ goes through all pairs $((j_1,\dots,j_n),(k_1,\dots,k_m))$,
$j_p,\,k_r\in\{\pm1,\dots,\pm N\}$, \ $p=1,\dots,n$, $r=1,\dots,m$,
such that $j_p\neq\pm j_{\bar p}$ and $k_r\neq\pm k_{\bar r}$ if
$p\neq\bar p$ or $r\neq\bar r$.
Write
\begin{eqnarray*}
I&&=\sum_{\gamma\in\Gamma}{\sum}^\gamma \,
h_1(u_{j_1},\dots,u_{j_n}) h_2(u_{k_1},\dots,u_{k_m}) \\
&&\qquad\qquad\qquad Z_G(\Delta_{j_1})\cdots Z_G(\Delta_{j_n})
Z_G(\Delta_{k_1})\cdots Z_G(\Delta_{k_m}),
\end{eqnarray*}
where $\sum^\gamma$ contains those terms of $\sum'$ for which
$j_p=k_r$ or $j_p=-k_r$ if the vertices $(p,1)$ and $(r,2)$ are
connected in $\gamma$, and $j_p\neq \pm k_r$ if $(p,1)$ and $(r,2)$
are not connected. Let us define the sets
\begin{eqnarray*}
A_1&=&A_1(\gamma)=\{p\colon\;p\in\{1,\dots,n\},\textrm{ and no
edge starts from }(p,1)\textrm{ in }\gamma\},\\
A_2&=&A_2(\gamma)=\{r\colon\;r\in\{1,\dots,m\},\textrm{ and no
edge starts from }(r,2)\textrm{ in }\gamma\}
\end{eqnarray*}
and
\begin{eqnarray*}
B=B(\gamma)&&=\{(p,r)\colon\; p\in\{1,\dots,n\},\;
r\in\{1,\dots,m\},\\
&&\qquad\qquad (p,1) \textrm{ and } (r,2)
\textrm{ are connected in }\gamma\}
\end{eqnarray*}
together with the map $\alpha\colon\;\{1,\dots,n\}\setminus A_1\to
\{1,\dots,m\}\setminus A_2$ defined as
\begin{equation}
\alpha(p)=r \quad \textrm{if } (p,r)\in B \quad \textrm{for all \ }
p\in\{1,\dots,n\}\setminus A_1. \label{(5.2)}
\end{equation}
Let $\Sigma^\gamma$ denote the value of the inner sum $\sum^\gamma$
for some $\gamma\in\Gamma$ in the last summation formula, and
write it in the form
$$
\Sigma^\gamma=\Sigma_1^\gamma+\Sigma_2^\gamma
$$
with
\begin{eqnarray*}
\Sigma_1^\gamma&&={\sum}^\gamma h_1(u_{j_1},\dots,u_{j_n})
h_2(u_{k_1},\dots,u_{k_m})
\prod_{p\in A_1}Z_G(\Delta_{j_p})\prod_{r\in A_2} Z_G(\Delta_{k_r})\\
&&\qquad\qquad\qquad\cdot
\prod_{(p,r)\in B} E\left(Z_G(\Delta_{j_p})Z_G(\Delta_{k_r})\right)
\end{eqnarray*}
and
\begin{eqnarray*}
\Sigma_2^\gamma&&={\sum}^\gamma h_1(u_{j_1},\dots,u_{j_n})
h_2(u_{k_1},\dots,u_{k_m})
\prod_{p\in A_1}Z_G(\Delta_{j_p})\prod_{r\in A_2} Z_G(\Delta_{k_r})\\
&&\qquad \cdot\left[\prod_{(p,r)\in B}Z_G(\Delta_{j_p})Z_G(\Delta_{k_r})
-E\left( \prod_{(p,r)\in B}
Z_G(\Delta_{j_p})Z_G(\Delta_{k_r})\right)\right].
\end{eqnarray*}
The random variables $\Sigma^\gamma_1$ and $\Sigma^\gamma_2$ are
real valued. To see this observe that if the sum defining these
expressions contains a term with arguments $\Delta_{j_p}$ and
$\Delta_{k_r}$, then it also contains the term with arguments
$-\Delta_{j_p}$ and $-\Delta_{k_r}$. This fact together with
property~(v) of the random spectral measure~$Z_G$ and the
analogous property of the functions~$h_1$ and~$h_2$ implies that
$\Sigma^\gamma_1=\overline{\Sigma^\gamma_1}$ and
$\Sigma^\gamma_2=\overline{\Sigma^\gamma_2}$. Hence these random
variables are real valued. As a consequence, we can bound
$(n+m-2|\gamma|)!I_G(h_\gamma)-\Sigma_1^\gamma$ and
$\Sigma_2^\gamma$ by means of their second moment.
We are going to show that $\Sigma_1^\gamma$ is a good
approximation of $(n+m-2|\gamma|)!\,I_G(h_\gamma)$, and
$\Sigma_2^\gamma$ is negligibly small. This implies that
$(n+m-2|\gamma|)!I_G(h_\gamma)$ well
approximates~$\Sigma^\gamma$. The proofs are based on
some simple ideas, but unfortunately their description
demands a complicated notation which makes their reading
unpleasant.
To estimate $(n+m-2|\gamma|)!I_G(h_\gamma)-\Sigma_1^\gamma$
we rewrite $\Sigma^\gamma_1$ as a Wiener--It\^o integral with
a kernel function adapted to the regular system~${\cal D}$
which is close to~$h_\gamma$. To find this kernel function
we rewrite the sum defining~$\Sigma^\gamma_1$ by first
fixing the variables $u_{j_p}$, $p\in A_1$, and $u_{k_r}$,
$r\in A_2$, summing over the remaining variables, and
then summing over the variables fixed in the first step.
We get that
\begin{eqnarray}
\Sigma_1^\gamma=\sum_{\substack{ j_p\colon\; 1\le |j_p|\le N
\textrm{ for all }p\in A_1\\
k_r\colon\; 1\le |k_r|\le N \textrm{ for all }r\in A_2}}
&&h_{\gamma,1}(j_p,\;p\in A_1,\, k_r,\, r\in A_2) \nonumber \\
&&\qquad \prod_{p\in A_1}Z_G(\Delta_{j_p})\prod_{r\in A_2}
Z_G(\Delta_{k_r})
\label{(5.3)}
\end{eqnarray}
with a function $h_{\gamma,1}$ depending on the arguments
$j_p$, $p\in A_1$, and $k_r$, $r\in A_2$, with values
$j_p,k_r\in\{\pm1,\dots,\pm N\}$, which is defined with the help
of another function $h_{\gamma,2}$, described below, depending
on the same arguments. More explicitly,
formula~(\ref{(5.3)}) holds with the function $h_{\gamma,1}$
defined as
\begin{equation}
h_{\gamma,1}(j_p,\,p\in A_1,\;k_r,\,r\in A_2)=0 \label{(5.4a)}
\end{equation}
if the numbers in the set
$\{\pm j_p\colon\;p\in A_1\}\cup\{\pm k_r\colon\;r\in A_2\}$
are not all different, and
\begin{equation}
h_{\gamma,1}(j_p,\,p\in A_1,\;k_r,\,r\in A_2)
=h_{\gamma,2}(j_p,\,p\in A_1,\;k_r,\,r\in A_2) \label{(5.4b)}
\end{equation}
if all numbers $\pm j_p$, $p\in A_1$, and $\pm k_r$,
$r\in A_2$ are different, where we define the function
$h_{\gamma,2}(j_p,\,p\in A_1,\;k_r,\,r\in A_2)$ for all
sequences $j_p$, $p\in A_1$ and $k_r$, $r\in A_2$, with
$j_p,k_r\in\{\pm1,\dots,\pm N\}$ (i.e. also in the case
when some of the arguments~$j_p$, $p\in A_1$, or $k_r$,
$r\in A_2$, agree) by the formula
\begin{eqnarray}
h_{\gamma,2}(j_p,\;p\in A_1,\,k_r,\,r\in A_2)
&&={\sum}^{\gamma,1} h_1(u_{j_1},\dots,u_{j_n})
h_2(u_{k_1},\dots,u_{k_m})
\nonumber \\
&&\qquad\cdot \prod_{(p,r)\in B}
E\left(Z_G(\Delta_{j_p})Z_G(\Delta_{k_r})\right).
\label{(5.5)}
\end{eqnarray}
The value of the sum $\sum^{\gamma,1}$ in
formula~(\ref{(5.5)}), which depends on the arguments
$j_p$, $p\in A_1$, and $k_r$, $r\in A_2$, is defined
in the following way. We sum up for such sequences
$(j_1,\dots,j_n)$ and $(k_1,\dots,k_m)$ whose
coordinates with indices $p\in A_1$ and $r\in A_2$ are
fixed, and whose coordinates with indices
$p\in \{1,\dots,n\}\setminus A_1$
and $r\in \{1,\dots,m\}\setminus A_2$ satisfy the
following conditions. Put
$C=\{\pm j_p,\,p\in A_1\}\cup\{\pm k_r,\,r\in A_2\}$.
We demand that all numbers $j_p$ and $k_r$ with
indices $p\in\{1,\dots,n\}\setminus A_1$ and
$r\in\{1,\dots,m\}\setminus A_2$ are such that
$j_p,k_r\in\{\pm1,\dots,\pm N\}\setminus C$.
To formulate the remaining conditions let us write all
numbers $r\in\{1,\dots,m\}\setminus A_2$ in the form
$r=\alpha(p)$, $p\in\{1,\dots,n\}\setminus A_1$ with
the map~$\alpha$ defined in~(\ref{(5.2)}). We also
demand that only such sequences appear in the
summation whose coordinates $k_r=k_{\alpha(p)}$
satisfy the condition $k_{\alpha(p)}=\pm j_p$ for all
$p\in\{1,\dots,n\}\setminus A_1$. Beside this, all
numbers $\pm j_p$, $p\in\{1,\dots,n\}\setminus A_1$,
must be different. The summation in $\sum^{\gamma,1}$
is taken for all such sequences $j_p$, $p\in\{1,\dots,n\}$
and $k_r$, $r\in\{1,\dots,m\}$, whose coordinates with
$p\in\{1,\dots,n\}\setminus A_1$ and
$r\in\{1,\dots,m\}\setminus A_2$ satisfy the above
conditions.
Formula~(\ref{(5.5)}) can be rewritten in a simpler form.
To do this let us first observe that the condition
$k_{\alpha(p)}=\pm j_p$ can be replaced by the condition
$k_{\alpha(p)}=-j_p$ in it, and we can write
$G(\Delta_{j_p})$ instead of the term
$EZ_G(\Delta_{j_p})Z_G(\Delta_{k_r})$ (with $(p,r)\in B$)
in the product at the end of~(\ref{(5.5)}). This follows
from the fact that
$EZ_G(\Delta_{j_p})Z_G(\Delta_{k_r})=EZ_G(\Delta_{j_p})^2=0$
if $k_r=j_p$, and $EZ_G(\Delta_{j_p})Z_G(\Delta_{k_r})
=EZ_G(\Delta_{j_p})Z_G(\Delta_{-j_p})=G(\Delta_{j_p})$ if
$k_r=-j_p$. Beside this, the expression in~(\ref{(5.5)})
does not change if we take summation for all such
sequences for which the number $j_p$ with coordinate
$p\in\{1,\dots,n\}\setminus A$ takes all possible
values $j_p\in\{\pm1,\dots,\pm N\}$, because in such a
way we only attach such terms to the sum which equal zero.
This follows from the fact that both functions $h_1$ and
$h_2$ are adapted to the regular system~${\cal D}$, hence
$h_1(u_{j_1},\dots,u_{j_n})h_2(u_{k_1},\dots,u_{k_m})=0$
if for an index
$p\in\{1,\dots,n\}\setminus A_1$ we have $j_p=\pm j_{p'}$ with
some $p'\neq p$ or $j_p=-k_r$ with $(p,r)\in B$, and beside
this there exists some $r'\in A_2$ such that
$j_p=\pm k_{r'}$.
The above relations enable us to rewrite~(\ref{(5.5)}) in
the following way. Let us define the map $\alpha^{-1}$
on the set $\{1,\dots,m\}\setminus A_2$ as the
inverse of the map $\alpha$ defined in~(\ref{(5.2)}),
i.e.\ $\alpha^{-1}(r)=p$ if $(p,r)\in B$. With this
notation we can write
\begin{eqnarray}
&&h_{\gamma,2}(j_p,\,p\in A_1,\;k_r,\,r\in A_2) \nonumber \\
&&\qquad= \!\!\!\!\!\!\!\!\!\!
\sum_{\substack{ j_p,\,p\in \{1,\dots,n\}\setminus A_1,\\
1\le |j_p|\le N \textrm{ for all indices } p }}
\!\! \!\!\!\!\!\!\!\!\!\!
h_1(u_{j_1},\dots,u_{j_n})
h_2(u_{k_r},\,r\in A_2, -u_{j_{\alpha^{-1}(r)}},
\,r\in\{1,\dots,m\}\setminus A_2)\nonumber \\
&&\qquad\qquad\qquad\qquad
\prod_{p\in \{1,\dots,n\}\setminus A_1} G(\Delta_{j_p}).
\label{(5.6)}
\end{eqnarray}
Formula~(\ref{(5.6)}) can be rewritten as
\begin{eqnarray}
&&h_{\gamma,2}(j_p,\,p\in A_1,\;k_r,\,r\in A_2) \label{(5.7)} \\
&&\qquad=\int h_1(u_{j_p},\,p\in A_1,\;
x_p,\,p\in\{1,\dots,n\}\setminus A_1) \nonumber \\
&&\qquad\qquad h_2(u_{k_r},\,r\in A_2,\; -x_{\alpha^{-1}(r)},
\,r\in\{1,\dots,m\}\setminus A_2)
\prod_{p\in \{1,\dots,n\}\setminus A_1} G(\,dx_p). \nonumber
\end{eqnarray}
We define with the help of $h_{\gamma,1}$ and
$h_{\gamma,2}$ two new functions on $R^{(n+m-2|\gamma|)\nu}$
with arguments $x_1,\dots,x_{n+m-2|\gamma|}$. The first
one will be the kernel function of the Wiener--It\^o
integral expressing~$\Sigma^\gamma_1$, and the second
one will be equal to the function~$h_\gamma$ defined
in~(\ref{(5.1)}). We define these functions in two
steps. In the first step we reindex the arguments of
the functions~$h_{\gamma,1}$ and~$h_{\gamma,2}$ to get
functions depending on sequences
$j_1,\dots,j_{n+m-2|\gamma|}$. For this goal we list
the elements of the sets $A_1$ and $A_2$ as
$A_1=\{p_1,\dots,p_{n-|\gamma|}\}$ with
$1\le p_1<\cdots<p_{n-|\gamma|}\le n$. Since the
approximation error can be chosen arbitrarily small, Part~B
is proved in the special case $h_1\in\hat{\bar{{\cal H}}}_G^n$,
$h_2\in\hat{\bar{{\cal H}}}_G^m$.
If $h_1\in\bar{{\cal H}}_G^n$ and $h_2\in\bar{{\cal H}}_G^m$,
then let us choose a sequence of functions
$h_{1,r}\in\hat{\bar{{\cal H}}}_G^n$ and
$h_{2,r}\in\hat{\bar{{\cal H}}}_G^m$ such that
$h_{1,r}\to h_1$ and $h_{2,r}\to h_2$ in the norm of the
spaces $\bar{{\cal H}}_G^n$ and $\bar{{\cal H}}_G^m$
respectively. Define the functions $\hat h_\gamma(r)$ and
$h_\gamma(r)$ in the same way as $h_\gamma$, but substitute
the pair of functions $(h_1,h_2)$ by $(h_{1,r},h_2)$
and $(h_{1,r},h_{2,r})$ in their definition. We shall
show with the help of Part~(A) that
$$
E|I_G(h_1)I_G(h_2)-I_G(h_{1,r})I_G(h_{2,r})|\to0,
$$
and
$$
E|I_G(h_\gamma)-I_G(h_\gamma(r))|\to0
\quad\textrm{for all }\gamma\in\Gamma
$$
as $r\to\infty$. Then a simple limiting procedure shows that
Theorem~5.3 holds for all $h_1\in\bar{{\cal H}}_G^n$ and
$h_2\in\bar{{\cal H}}_G^m$.
We have
\begin{eqnarray*}
&&E|I_G(h_1)I_G(h_2)-I_G(h_{1,r})I_G(h_{2,r})|\\
&&\qquad \le E|(I_G(h_1-h_{1,r}))I_G(h_2)|
+E|I_G(h_{1,r})I_G(h_2-h_{2,r})|\\
&&\qquad \le\frac1{n!\,m!}
\left(\|h_1-h_{1,r}\|^{1/2}\|h_2\|^{1/2}
+\|h_2-h_{2,r}\|^{1/2}\|h_{1,r}\|^{1/2}\right)\to0,
\end{eqnarray*}
and by Part~(A) of Theorem~5.3
\begin{eqnarray*}
&&E|I_G(h_\gamma)-I_G(h_\gamma(r))|\le
E|I_G(h_\gamma)-I_G(\hat h_\gamma(r))|+
E|I_G(h_\gamma(r))-I_G(\hat h_\gamma(r))|\\
&&\qquad \le\|h_\gamma-\hat h_\gamma(r)\|^{1/2}
+\|h_\gamma(r)-\hat h_\gamma(r)\|^{1/2} \\
&&\qquad \le\|h_1-h_{1,r}\|^{1/2}\|h_2\|^{1/2}
+\|h_2-h_{2,r}\|^{1/2}\|h_{1,r}\|^{1/2}\to0.
\end{eqnarray*}
Theorem~5.3 is proved. \hfill$\qed$
\medskip
We formulate some consequences of Theorem~5.3. Let
$\bar\Gamma\subset\Gamma$ denote the set of complete
diagrams, i.e.\ let a diagram $\gamma\in\bar\Gamma$
if an edge enters in each vertex of~$\gamma$.
\index{complete diagram}
We have $EI(h_\gamma)=0$ for all
$\gamma\in\Gamma\setminus\bar\Gamma$, since~(\ref{(4.3)})
holds for all $f\in\bar{{\cal H}}_G^n$, $n\ge1$. If
$\gamma\in\bar\Gamma$, then $I(h_\gamma)\in\bar{{\cal H}}_G^0$.
Let $h_\gamma$ denote the value of $I(h_\gamma)$ in this case.
Now we have the following
\medskip\noindent
{\bf Corollary 5.4.} {\it For all
$h_1\in\bar{{\cal H}}_G^{n_1}$,\dots,
$h_m\in\bar{{\cal H}}_G^{n_m}$
$$
E\left(n_1!\,I_G(h_1)\cdots n_m!\,I_G(h_m)\right)=
\sum_{\gamma\in\bar\Gamma}h_\gamma.
$$
(The sum on the right-hand side equals zero if $\bar\Gamma$ is
empty.)}
\medskip
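\noindent
{\it Example.} The simplest special case may help in checking the
notation. Let $m=2$, $n_1=n_2=1$. Then $\bar\Gamma$ consists of the
single diagram~$\gamma$ which connects the two vertices, and
Corollary~5.4 reduces to the identity
$$
E\left(I_G(h_1)I_G(h_2)\right)=h_\gamma
=\int h_1(x)h_2(-x)G(\,dx)=\int h_1(x)\overline{h_2(x)}\,G(\,dx),
$$
since $h_2(-x)=\overline{h_2(x)}$ for $h_2\in\bar{{\cal H}}_G^1$.
This agrees with the It\^o isometry for one-fold Wiener--It\^o
integrals, i.e.\ with the scalar product in~$\bar{{\cal H}}_G^1$.
\medskip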
As a consequence of Corollary~5.4 we can calculate the expectation of
products of Wick polynomials of Gaussian random variables.
Let $X_{k,j}$, $EX_{k,j}=0$, $1\le k\le p$, $1\le j\le n_k$, be a
sequence of Gaussian random variables. We want to calculate the
expected value of the Wick polynomials
$\colon\!X_{k,1}\cdots X_{k,n_k}\!\colon$,
\ $1\le k\le p$, if we know all covariances
$EX_{k,j}X_{\bar k,\bar j}=a((k,j),(\bar k,\bar j))$,
$1\le k,\bar k\le p$, $1\le j\le n_k$, $1\le \bar j\le n_{\bar k}$.
For this goal let us consider the class of complete diagrams
$\bar\Gamma(n_1,\dots,n_p)$, and define the following quantity
$\gamma(A)$ depending on the complete diagram $\gamma$ and the set
$A$ of all covariances $EX_{k,j}X_{\bar k,\bar j}=a((k,j),
(\bar k,\bar j))$:
$$
\gamma(A)=\prod_{((k,j),(\bar k,\bar j))\textrm{ is an edge of }\gamma}
a((k,j),(\bar k,\bar j)), \quad \gamma\in\bar\Gamma(n_1,\dots,n_p).
$$
$$
With the above notation we can formulate the following result.
\medskip\noindent
{\bf Corollary 5.5.} {\it
Let $X_{k,j}$, $EX_{k,j}=0$, $1\le k\le p$, $1\le j\le n_k$, be a
sequence of Gaussian random variables. Let
$a((k,j),(\bar k,\bar j))=EX_{k,j}X_{\bar k,\bar j}$,
$1\le k,\bar k\le p$, $1\le j\le n_k$, $1\le \bar j\le n_{\bar k}$,
denote the covariances of these random variables. Then
the expected value of the product of the Wick polynomials
$\colon\!X_{k,1}\cdots X_{k,n_k}\!\colon$, $1\le k\le p$,
can be expressed as
$$
E\left(\prod_{k=1}^p\colon\!X_{k,1}\cdots X_{k,n_k}\!\colon\right)
=\sum_{\gamma\in\bar\Gamma(n_1,\dots,n_p)} \gamma(A)
$$
with the above defined quantities $\gamma(A)$. In the case when
$\bar\Gamma(n_1,\dots,n_p)$ is empty, e.g. if $n_1+\cdots+n_p$ is an
odd number, the above expectation equals zero.}
\medskip\noindent
{\it Remark.} In the special case when $X_{k,1}=\cdots=X_{k,n_k}=X_k$,
and $EX_k^2=1$ for all indices $1\le k\le p$ Corollary~5.5 provides
a formula for the expectation of the product of Hermite
polynomials of standard normal random variables. In this
case we have $a((k,j),(\bar k,\bar j))=\bar a(k,\bar k)$ with a %\jmath
function $\bar a(\cdot,\cdot)$ not depending on the arguments~$j$
and~$\bar j$, and the left-hand side of the identity in Corollary~5.5 %\jmath
equals $EH_{n_1}(X_1)\cdots H_{n_p}(X_p)$ with standard normal random
variables $X_1,\dots,X_p$ with correlations
$EX_kX_{\bar k}=\bar a(k,\bar k)$.
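\medskip\noindent
{\it Example.} The following well-known identity may illustrate the
above Remark. Let $p=2$, and let $X_1$, $X_2$ be standard normal
random variables with $EX_1X_2=r$. Since all edges of a diagram
connect vertices of different rows, a complete diagram exists only if
$n_1=n_2=n$, and in this case the complete diagrams are exactly the
pairings of the $n$ vertices of the first row with the $n$ vertices of
the second row. There are $n!$ such diagrams, and each of them yields
the contribution $r^n$. Hence Corollary~5.5 gives
$$
EH_{n_1}(X_1)H_{n_2}(X_2)=
\begin{cases}
n!\,r^n&\textrm{if }n_1=n_2=n,\\
0&\textrm{if }n_1\neq n_2.
\end{cases}
$$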
\medskip\noindent
{\it Proof of Corollary 5.5.}\/ We can represent the random variables
$X_{k,j}$ in the form $X_{k,j}=\sum\limits_pc_{k,j,p}\xi_p$ with some
appropriate coefficients $c_{k,j,p}$, where $\xi_1,\xi_2,\dots$ is
a sequence of independent standard normal random variables. Let
$Z(\,dx)$ denote a random spectral measure corresponding to the
one-dimensional spectral measure with density function
$g(x)=\frac1{2\pi}$ for $|x|<\pi$, and $g(x)=0$ for $|x|\ge\pi$. The
random integrals $\int e^{ipx}Z(\,dx)$, $p=0,\pm1,\pm2,\dots$, are
independent standard normal random variables.
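Indeed, these random integrals are real valued random variables, and
their stated covariance structure can be checked directly:
$$
E\left(\int e^{ipx}Z(\,dx)\int e^{iqx}Z(\,dx)\right)
=\int_{-\pi}^\pi e^{ipx}\overline{e^{iqx}}\,g(x)\,dx
=\frac1{2\pi}\int_{-\pi}^\pi e^{i(p-q)x}\,dx=\delta_{p,q},
$$
where $\delta_{p,q}$ denotes the Kronecker delta, and jointly Gaussian
random variables with these covariances are independent standard
normal random variables.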
Define $h_{k,j}(x)=\sum\limits_p c_{k,j,p}e^{ipx}$, $k=1,\dots,p$,
$1\le j\le n_k$. The random variables $X_{k,j}$ can be identified
with the random integrals $\int h_{k,j}(x)Z(\,dx)$, $k=1,\dots,p$,
$1\le j\le n_k$, since their joint distributions coincide. Put
$\hat h_k(x_1,\dots,x_{n_k})=\prod\limits_{j=1}^{n_k}h_{k,j}(x_j)$.
It follows from Theorem~4.7 that
$$
\colon\!X_{k,1}\cdots X_{k,n_k}\!\colon=
\int\hat h_k(x_1,\dots,x_{n_k}) Z(\,dx_1)\dots Z(\,dx_{n_k})
=n_k!I(\hat h_k(x_1,\dots,x_{n_k}))
$$
for all $1\le k\le p$. Hence an application of Corollary~5.4
yields Corollary~5.5. One only has to observe that
$\int_{-\pi}^\pi h_{k,j}(x)\overline{h_{\bar k,\bar j}(x)}\,g(x)\,dx
=a((k,j),(\bar k,\bar j))$ for all $k,\bar k=1,\dots,p$,
$1\le j\le n_k$ and $1\le\bar j\le n_{\bar k}$. \hfill$\qed$
\medskip
Theorem~5.3 states in particular that the product of Wiener--It\^o
integrals with respect to the random spectral measure of a stationary
Gaussian field belongs to the Hilbert space ${\cal H}$ defined by this
field, since it can be written as a sum of Wiener--It\^o integrals.
This implies a trivial measurability property, and also that the
product has a finite second moment, which is not so trivial.
Theorem~5.3 actually gives the following non-trivial inequality.
Let $h_1\in{\cal H}_G^{n_1}$,\dots, $h_m\in{\cal H}_G^{n_m}$. Let
$|\bar\Gamma(n_1,n_1,\dots,n_m,n_m)|$ denote the number of complete
diagrams in $\bar\Gamma(n_1,n_1,\dots,n_m,n_m)$, and put
$$
C(n_1,\dots,n_m)=\frac{|\bar\Gamma(n_1,n_1,\dots,n_m,n_m)|}
{n_1!\cdots n_m!}.
$$
In the special case $n_1=\cdots=n_m=n$ let
$\bar C(n,m)=C(n_1,\dots,n_m)$. Then
\medskip\noindent
{\bf Corollary 5.6.} {\it
\begin{eqnarray*}
&&E\left[(n_1!\,I_G(h_1))^2\cdots (n_m!\, I_G(h_m))^2\right] \\
&&\qquad \le C(n_1,\dots,n_m)\, E(n_1!\,I_G(h_1))^2
\cdots E(n_m!\,I_G(h_m))^2.
\end{eqnarray*}
In particular,
$$
E\left[(n!I_G(h))^{2m}\right]\le\bar C(n,m)(E(n!I_G(h))^2)^m
\quad \textrm{ if \ } h\in {\cal H}_G^n.
$$
}
\medskip\noindent
Corollary~5.6 follows immediately from Corollary~5.4 by applying it
first for the sequence $h_1,h_1,\dots,h_m,h_m$ and then for the pairs
$h_j,h_j$, which yields that
$$
E (n_j!I_G(h_j))^2=n_j!\|h_j\|^2, \quad 1\le j\le m.
$$
One only has to observe that
$|h_\gamma|\le\|h_1\|^2\cdots\|h_m\|^2$ for all complete diagrams
by Part~(A) of Theorem~5.3.
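\medskip\noindent
{\it Example.} The case $n=1$ may illustrate the second inequality of
Corollary~5.6. The set $\bar\Gamma(1,1,\dots,1)$ (with $2m$ rows
consisting of one vertex each) consists of the pairings of $2m$
vertices into $m$ pairs, hence
$|\bar\Gamma(1,\dots,1)|=(2m-1)!!=1\cdot3\cdots(2m-1)$, and
$\bar C(1,m)=(2m-1)!!$. Since $\xi=I_G(h)$ is a Gaussian random
variable for $h\in{\cal H}_G^1$, Corollary~5.6 reduces in this case to
$$
E\xi^{2m}\le(2m-1)!!\,\left(E\xi^2\right)^m,
$$
where actually equality holds.
\medskip\noindent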
The inequality in Corollary~5.6 is sharp. If $G$ is a finite
measure and $h_1\in {\cal H}_G^{n_1}$,\dots, $h_m\in {\cal H}^{n_m}_G$
are constant functions, then equality holds in Corollary~5.6. We remark
that in this case $I_G(h_1),\dots,I_G(h_m)$ are constant times the
$n_1$-th,\dots, $n_m$-th Hermite polynomials of the same standard
normal random variable. Let us emphasize that the constant
$C(n_1,\dots,n_m)$ depends only on the parameters $n_1,\dots,n_m$
and not on the form of the functions $h_1,\dots,h_m$. The function
$C(n_1,\dots,n_m)$ is monotone in its arguments. The following
argument shows that
$$
{C(n_1+1,n_2,\dots,n_m)}\ge{C(n_1,\dots,n_m)}.
$$
Let us call two complete diagrams in
$\bar\Gamma(n_1,n_1,\dots,n_m,n_m)$ or in
$\bar\Gamma(n_1+1,n_1+1,\dots,n_m,n_m)$ equivalent if they
can be transformed into each other by permuting the vertices
$(1,1),\dots,(1,n_1)$ in $\bar\Gamma(n_1,n_1,\dots,n_m,n_m)$ or
the vertices $(1,1),\dots,(1,n_1+1)$ in
$\bar\Gamma(n_1+1,n_1+1,\dots,n_m,n_m)$. The equivalence classes
have $n_1!$ elements in the first case and $(n_1+1)!$ elements
in the second one. Moreover, the number of equivalence classes is
less in the first case than in the second one. (They would agree if
we counted only those equivalence classes in the second case which
contain a diagram where the vertices $(1,n_1+1)$ and $(2,n_1+1)$ are
connected by an edge.) Hence
$$
\frac1{n_1!}|\bar\Gamma(n_1,n_1,\dots,n_m,n_m)|\le
\frac1{(n_1+1)!}|\bar\Gamma(n_1+1,n_1+1,\dots,n_m,n_m)|
$$
as we claimed.
The next result
%, formulated in a more elementary way,
may better
illuminate the content of Corollary~5.6.
\medskip\noindent
{\bf Corollary 5.7.} {\it Let $\xi_1,\dots,\xi_k$ be a normal random
vector, and $P(x_1,\dots,x_k)$ a polynomial of degree~$n$. Then
$$
E\left[P(\xi_1,\dots,\xi_k)^{2m}\right]\le\bar C(n,m)(n+1)^m
\left(EP(\xi_1,\dots,\xi_k)^2\right)^m
$$
with the constant $\bar C(n,m)$ introduced before Corollary~5.6.}
\medskip
The multiplying constant $\bar C(n,m)(n+1)^m$ is not sharp in this
case.
%Observe that it does not depend on~$k$.
\medskip\noindent
{\it Proof of Corollary 5.7.}\/ We can write $\xi_j=\int f_j(x)Z(\,dx)$
with some $f_j\in{\cal H}^1$, $j=1,2,\dots,k$, where $Z(\,dx)$ is
the same as in the proof of Corollary~5.5. There exist some
$h_j\in{\cal H}^j$, $j=0,1,\dots,n$, such that
$$
P(\xi_1,\dots,\xi_k)=\sum_{j=0}^n j! I(h_j).
$$
Then
\begin{eqnarray*}
&&EP(\xi_1,\dots,\xi_k)^{2m}
=E\left[\left(\sum_{j=0}^nj!I(h_j)\right)^{2m}\right]
\le (n+1)^m E\left[\sum_{j=0}^n (j!I(h_j))^2\right]^m \\
&&\qquad\le (n+1)^m\sum_{p_0+p_1+\cdots+p_n=m}C(p_0,\dots,p_n)(EI(h_0)^2)^{p_0}
\cdots (E(n!\,I(h_n))^2)^{p_n}\frac{m!}{p_0!\,p_1!\cdots p_n!} \\
&&\qquad\le (n+1)^m\bar C(n,m)\sum_{p_0+p_1+\cdots+p_n=m} (EI(h_0)^2)^{p_0}
\cdots (E(n!\,I(h_n))^2)^{p_n}\frac{m!}{p_0!\,p_1!\cdots p_n!} \\
&&\qquad=(n+1)^m\bar C(n,m)
\left[\sum_{j=0}^n E (j!\,I(h_j))^2\right]^m=(n+1)^m \bar C(n,m)
\left(EP(\xi_1,\dots,\xi_k)^2\right)^m.
\end{eqnarray*}
\hfill$\qed$
\chapter{Subordinated random fields. Construction of
self-similar fields}
Let $X_n$, $n\in\textrm{\BBB Z}_\nu$, be a discrete stationary
Gaussian random field, and let the random field $\xi_n$,
$n\in\textrm{\BBB Z}_\nu$, be subordinated to it. Let $Z_G$
denote the random spectral measure adapted to the random field
$X_n$. By Theorem~4.2 the random variable $\xi_0$ can be
represented as
$$
\xi_0=f_0+\sum_{k=1}^\infty \frac1{k!}\int
f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k)
$$
with an appropriate $f=(f_0,f_1,\dots)\in\,\textrm{Exp}\,{\cal H}_G$
in a unique way. This formula together with Theorem~4.4 yields the
following
\medskip\noindent
{\bf Theorem 6.1.} {\it A random field $\xi_n$, $n\in\textrm{\BBB Z}_\nu$,
subordinated to the stationary Gaussian random field $X_n$,
$n\in\textrm{\BBB Z}_\nu$, can be written in the form
\begin{equation}
\xi_n=f_0+\sum_{k=1}^\infty \frac1{k!}\int e^{i(n,x_1+\cdots+x_k)}
f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k), \quad n\in{\BBB Z}_\nu,
\label{(6.1)}
\end{equation}
with some $f=(f_0,f_1,\dots)\in\,\textrm{\rm Exp}\,{\cal H}_G$,
where~$G$ is the spectral measure of the field $X_n$, and $Z_G$ is
the random spectral measure adapted to it. This representation is
unique. It is also clear that formula~(\ref{(6.1)}) defines a subordinated
field for all $f\in\,\textrm{\rm Exp}\,{\cal H}_G$.}
\index{random field subordinated to a stationary Gaussian
random field (discrete case)}
\medskip\noindent
If the spectral measure $G$ has the property $G(\{x\colon\;x_p=u\})=0$
for all $u\in R^1$ and $1\le p\le\nu$, where $x=(x_1,\dots,x_\nu)$
(this is a strengthened form of the non-atomic property), then the
functions
$$
\bar f_k(x_1,\dots,x_k)=f_k(x_1,\dots,x_k)\tilde\chi_0^{-1}(x_1+\cdots+x_k),
\quad k=1,2,\dots,
$$
are meaningful, as functions in the measure space
$(R^{k\nu},{\cal B}^{k\nu},G^k)$, where
$\tilde\chi_n(x)=e^{i(n,x)}
\prod\limits_{p=1}^\nu\frac{e^{ix^{(p)}}-1}{ix^{(p)}}$,
$n\in\textrm{\BBB Z}_\nu$, denotes the Fourier transform of
the indicator function of the $\nu$-dimensional unit cube
$\prod\limits_{p=1}^\nu[n^{(p)},n^{(p)}+1]$. Then the random
variable $\xi_n$ in formula~(\ref{(6.1)}) can be rewritten
in the form
$$
\xi_n=f_0+\sum_{k=1}^\infty \frac1{k!}\int\tilde\chi_n(x_1+\cdots+x_k)
\bar f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k).
$$
Hence the following Theorem~$6.1'$ can be considered as the continuous
time version of Theorem~6.1.
\medskip\noindent
{\bf Theorem~$6.1'$.} {\it Let a generalized random field
$\xi(\varphi)$, $\varphi\in{\cal S}$, be subordinated to a
stationary Gaussian generalized random field $X(\varphi)$,
$\varphi\in{\cal S}$. Let $G$ denote the spectral measure
of the field $X(\varphi)$, and let $Z_G$ be the random
spectral measure adapted to it. Then $\xi(\varphi)$ can
be written in the form
\begin{equation}
\xi(\varphi)=f_0\cdot\tilde\varphi(0)+\sum_{k=1}^\infty\frac1{k!}
\int\tilde\varphi(x_1+\cdots+x_k)f_k(x_1,\dots,x_k)Z_G(\,dx_1)
\dots Z_G(\,dx_k), \label{($6.1'$)}
\end{equation}
where the functions $f_k$ are invariant under all permutations
of their variables,
$$
f_k(-x_1,\dots,-x_k)=\overline{f_k(x_1,\dots,x_k)},
\quad k=1,2,\dots,
$$
and
\begin{equation}
\sum_{k=1}^\infty \frac1{k!}\int (1+|x_1+\cdots+x_k|^2)^{-p}
|f_k(x_1,\dots,x_k)|^2G(\,dx_1)\dots G(\,dx_k)<\infty \label{(6.2)}
\end{equation}
with an appropriate number $p>0$. This representation is unique.
Conversely, all random fields $\xi(\varphi)$, $\varphi\in{\cal S}$,
defined by formulas~(\ref{($6.1'$)}) and~(\ref{(6.2)})
are subordinated to the stationary, Gaussian random field
$X(\varphi)$,~$\varphi\in{\cal S}$.}
\index{random field (generalized) subordinated to a generalized
stationary Gaussian random field}
\medskip\noindent
{\it Proof of Theorem~6.1$'$}.\/ The proof is based on the same
ideas as the proof of Theorem~6.1, but here we also adapt
some arguments from the theory of generalized functions.
(See \cite{r15}.) In particular, we exploit the following
continuity property of generalized random fields and
subordinated generalized random fields. If
$\varphi_n\to\varphi$ in the topology of the Schwartz
space ${\cal S}$, and $X(\varphi)$, $\varphi\in{\cal S}$,
is a generalized random field, then
$X(\varphi_n)\Rightarrow X(\varphi)$
stochastically. If $X(\varphi)$, $\varphi\in{\cal S}$, is
a generalized Gaussian random field, then also the relation
$E[X(\varphi_n)-X(\varphi)]^2\to0$ holds in this case.
Similarly, if $\xi(\varphi)$, $\varphi\in{\cal S}$, is
a subordinated generalized random field, and
$\varphi_n\to\varphi$, then
$E[\xi(\varphi_n)-\xi(\varphi)]^2\to0$ by the definition
of subordinated fields.
It can be seen with some work that a random field
$\xi(\varphi)$,~$\varphi\in{\cal S}$, defined
by~(\ref{($6.1'$)}) and~(\ref{(6.2)}) is subordinated
to~$X(\varphi)$. One has to check that the definition
of~$\xi(\varphi)$ in formula~(\ref{($6.1'$)}) is meaningful
for all~$\varphi\in{\cal S}$, because of~(\ref{(6.2)}),
$\xi(T_t\varphi)=T_t\xi(\varphi)$ for all shifts~$T_t$,~$t\in R^\nu$,
by Theorem~4.4, and also the following continuity property holds.
For all $\varepsilon>0$ there is a small neighbourhood~$H$
of the origin in the space~${\cal S}$ such that if
$\varphi=\varphi_1-\varphi_2\in H$ for
some $\varphi_1,\varphi_2\in{\cal S}$ then
$E[\xi(\varphi_1)-\xi(\varphi_2)]^2=E\xi(\varphi)^2<\varepsilon^2$.
Since the Fourier transform
$\varphi(\cdot)\to\tilde\varphi(\cdot)$ is a bicontinuous
map in ${\cal S}$, to prove the above continuity property
it is enough to check that $E\xi(\varphi)^2<\varepsilon^2$
if $\tilde\varphi\in H$ for an appropriate small
neighbourhood~$H$ of the origin in~${\cal S}$. But this
relation holds with the choice
$H=\{\varphi\colon (1+|x|^2)^p|\varphi(x)|\le\frac{\varepsilon^2}K
\textrm{ for all }x\in R^\nu\}$ with a sufficiently
large $K>0$ because of condition~(\ref{(6.2)}).
To prove that all subordinated fields have the above
representation observe that the relation
\begin{equation}
\xi(\varphi)=\Psi_{\varphi,0}+\sum_{k=1}^\infty\frac1{k!}\int
\Psi_{\varphi,k}(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k)
\label{(6.3)}
\end{equation}
holds for all $\varphi\in{\cal S}$ with some
$(\Psi_{\varphi,0},\Psi_{\varphi,1},\dots)\in\,\textrm{Exp}\,{\cal H}_G$
depending on the function~$\varphi$. We are going to show
that these functions $\Psi_{\varphi,k}$ can be given in the form
$$
\Psi_{\varphi,k}(x_1,\dots,x_k)=f_k(x_1,\dots,x_k)
\cdot\tilde\varphi (x_1+\cdots+x_k), \quad k=1,2,\dots,
$$
with some functions $f_k\in{\cal B}^{k\nu}$, and
$$
\Psi_{\varphi,0}=f_0\cdot\tilde\varphi(0)
$$
for all $\varphi\in{\cal S}$ with a sequence of functions
$f_0,f_1,\dots$ not depending on~$\varphi$.
To show this let us choose a $\varphi_0\in{\cal S}$ such
that $\tilde\varphi_0(x)>0$ for all $x\in R^{\nu}$. (We
can make for instance the choice $\varphi_0(x)=e^{-(x,x)}$.)
We claim that the finite linear combinations
$\sum a_p\varphi_0(x-t_p)=\sum a_pT_{t_p}\varphi_0(x)$ are
dense in ${\cal S}$. To prove this it is enough to show that
the functions $\psi$ whose Fourier transforms $\tilde\psi$
have a compact support can well be approximated by such
linear combinations, because these functions $\psi$ are
dense in ${\cal S}$. (The statement that these functions
$\psi$ are dense in ${\cal S}$ is equivalent to the
statement that their Fourier transforms $\tilde\psi$ are
dense in the space $\tilde{{\cal S}}\subset{\cal S}^c$
consisting of the Fourier transforms of the (real valued)
functions in the space~${\cal S}$.) We have
$\frac{\tilde\psi}{\tilde\varphi_0}\in{\cal S}^c$ for such
functions~$\psi$, where ${\cal S}^c$ again denotes the
Schwartz space of complex valued, smooth functions,
rapidly decreasing at infinity, because
$\tilde\varphi_0(x)\neq0$, and $\tilde\psi$ has a compact
support. There exists a function $\chi\in{\cal S}$ such
that $\tilde\chi=\frac{\tilde\psi}{\tilde\varphi_0}$.
(Here we exploit that the space of Fourier transforms of
the functions from ${\cal S}$ agrees with the space of
those functions $f\in{\cal S}^c$ for which
$f(-x)=\overline{f(x)}$.) Therefore
$\psi(x)=\chi*\varphi_0(x)=\int \chi(t)\varphi_0(x-t)\,dt$,
where $*$ denotes convolution. It can be seen by
exploiting this relation together with the rapid
decrease of~$\chi$ and~$\varphi_0$ together with their
derivatives at infinity, and approximating the
integral defining the convolution by an appropriate
finite sum that for all integers $r>0$, $s>0$ and
real numbers~$\varepsilon>0$ there exists a finite
linear combination
$\hat\psi(x)=\hat\psi_{r,s,\varepsilon}(x)
=\sum\limits_p a_p\varphi_0(x-t_p)$ such
that $(1+|x|^s)|\psi(x)-\hat\psi(x)|<\varepsilon$
for all $x\in R^\nu$, and the same estimate holds for
all derivatives of $\psi(x)-\hat\psi(x)$ of order
less than~$r$.
I only briefly explain why such an approximation exists.
Some calculation enables us to reduce this statement to
the case when $\psi=\chi*\varphi_0$ with a function
$\chi\in{\cal D}$, which has compact support. To give
the desired approximation choose a small number
$\delta>0$, introduce the cube
$\Delta=\Delta(\delta)=[-\delta,\delta)^\nu\subset R^\nu$
and define the vectors
$k(\delta)=(2k_1\delta,\dots,2k_\nu\delta)\in R^\nu$
for all $k=(k_1,\dots,k_\nu)\in\textrm{\BBB Z}_\nu$.
Given a fixed vector $x\in R^\nu$ let us define the vector
$u(x)\in R^\nu$ for all $u\in R^\nu$ as $u(x)=x+k(\delta)$
with that vector $k\in\textrm{\BBB Z}_\nu$ for which
$x+k(\delta)-u\in\Delta$, and put
$\varphi_{0,x}(u)=\varphi_0(u(x))$. It can be seen that
$\hat\psi(x)=\chi*\varphi_{0,x}(x)$ is a finite linear
combination of numbers of the form $\varphi_0(x-t_k)$
(with $t_k=k(\delta)$) with coefficients not depending
on~$x$. Moreover, if $\delta>0$ is chosen sufficiently
small (depending on $r,s$ and~$\varepsilon$), then
$\hat\psi(x)=\hat\psi_{r,s,\varepsilon}(x)$ has all
properties we demanded.
The above argument implies that there is a sequence of
functions $\hat\psi_{r,s,\varepsilon}$ which converges to
the function~$\psi$ in the topology of the space~${\cal S}$.
As a consequence, the finite linear combinations
$\sum a_p\varphi_0(x-t_p)$ are dense in~${\cal S}$.
Define
$$
f_k(x_1,\dots,x_k)
=\frac{\Psi_{\varphi_0,k}(x_1,\dots,x_k)}
{\tilde\varphi_0(x_1+\cdots+x_k)},
\quad k=1,2,\dots,\quad \textrm{and }
f_0=\frac{\Psi_{\varphi_0,0}}{\tilde\varphi_0(0)}.
$$
If $\varphi(x)=\sum a_p\varphi_0(x-t_p)=\sum a_pT_{t_p}\varphi_0(x)$,
and the sum defining~$\varphi$ is finite, then by Theorem~4.4
\begin{eqnarray*}
\xi(\varphi)&&=\left(\sum a_p\right)f_0\cdot\tilde\varphi_0(0)
+\sum_{k=1}^\infty\frac1{k!}\int
\sum_p a_pe^{i(t_p,x_1+\cdots+x_k)}
\tilde\varphi_0(x_1+\cdots+x_k) \\
&&\qquad\qquad\cdot f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k)\\
&&=f_0\cdot\tilde\varphi(0)+\sum_{k=1}^\infty\frac1{k!}\int
\tilde\varphi(x_1+\cdots+x_k)
f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k).
\end{eqnarray*}
Relation~(\ref{(6.3)}) holds for all $\varphi\in{\cal S}$,
and there exists a sequence
$\varphi_j(x)=\sum\limits_p a_p^{(j)}\varphi_0(x-t_p^{(j)})\in{\cal S}$
satisfying~(\ref{($6.1'$)}) such that $\varphi_j\to\varphi$
in the topology of~${\cal S}$. This implies that
$E[\xi(\varphi_j)-\xi(\varphi)]^2\to0$, and in particular
$EI_G(\Psi_{\varphi,k}-\hat\varphi_{j,k}f_k)^2\to0$ with
$\hat\varphi_{j,k}(x_1,\dots,x_k)=\tilde\varphi_j(x_1+\cdots+x_k)$
as $j\to\infty$ for all $k=1,2,\dots$. (To carry out some further
argument we restrict the domain of integration to a bounded
set~$A$.) We get that
$$
\int_A|\Psi_{\varphi,k}(x_1,\dots,x_k)-\tilde\varphi_j(x_1+\cdots+x_k)
f_k(x_1,\dots,x_k)|^2G(\,dx_1)\dots G(\,dx_k)\to0
$$
as $j\to\infty$ for all $k$ and for all bounded sets
$A\subset R^{k\nu}$. On the other hand,
$$
\int_A|\tilde\varphi(x_1+\cdots+x_k)-\tilde\varphi_j(x_1+\cdots+x_k)|^2
|f_k(x_1,\cdots,x_k)|^2G(\,dx_1)\dots G(\,dx_k)\to0,
$$
since $\tilde\varphi_j(x)-\tilde\varphi(x)\to0$ in the supremum
norm if $\tilde\varphi_j\to\tilde\varphi$ in the topology
of ${\cal S}$, and the property $\tilde\varphi_0(x)>0$ (of the
function $\tilde\varphi_0$ appearing in the definition of the
function $f_k$) together with the continuity of
$\tilde\varphi_0$ and the inequality
$EI_G(\hat\varphi_{0,k}f_k)^2<\infty$ imply that
$\int_A |f_k(x_1,\dots,x_k)|^2G(\,dx_1)\dots G(dx_k)<\infty$
on all bounded sets~$A$. The last two relations yield that
$$
\Psi_{\varphi,k}(x_1,\dots,x_k)
=\tilde\varphi(x_1+\cdots+x_k)f_k(x_1,\dots,x_k),
\quad k=1,2,\dots,
$$
since both sides of this identity are the limit of the sequence
$$
\tilde\varphi_j(x_1+\cdots+x_k)f_k(x_1,\dots,x_k), \quad j=1,2,\dots
$$
in the $L^2_{G^k_A}$ norm, where $G^k_A$ denotes the restriction of
the measure~$G^k$ to the set~$A$.
Similarly,
$$
\Psi_{\varphi,0}=\tilde\varphi(0)f_0.
$$
These relations imply~(\ref{($6.1'$)}).
To complete the proof of Theorem~$6.1'$ we show that~(\ref{(6.2)})
follows from the continuity of the transformation
$F\colon\;\varphi\to \xi(\varphi)$ from the space
${\cal S}$ into the space $L^2(\Omega,{\cal A},P)$.
We recall that the transformation $\varphi\to\tilde\varphi$ is
bicontinuous in ${\cal S}^c$. Hence for a subordinated field
$\xi(\varphi)$, $\varphi\in{\cal S}$, the transformation
$\tilde\varphi\to\xi(\varphi)$ is a continuous map from the
space of the Fourier transforms of the functions in the
space~${\cal S}$ to $L^2(\Omega,{\cal A},P)$. This continuity
implies that there exist some integers $p>0$, $r>0$ and real
number $\delta>0$ such that if
\begin{equation}
(1+|x|^2)^p\left|\frac{\partial^{s_1+\cdots+s_\nu}}
{\partial {x^{(1)}}^{s_1}
\dots\partial {x^{(\nu)}}^{s_\nu}}\tilde\varphi(x)\right|<\delta
\quad\textrm{for all } s_1+\cdots+s_\nu\le r, \label{(6.4)}
\end{equation}
then $E\xi(\varphi)^2\le1$.
Let us choose a function $\psi\in{\cal S}$ such that $\psi$
has a compact support, $\psi(x)=\psi(-x)$, $\psi(x)\ge0$
for all $x\in R^\nu$, and $\psi(x)=1$ if $|x|\le 1$. (There
exist such functions.) Define the functions
$\tilde\varphi_m(x)=C(1+|x|^2)^{-p}\psi(\frac xm)$.
Then $\varphi_m\in{\cal S}$, since its Fourier transform
$\tilde\varphi_m$ is an even function, and it is in the
space~${\cal S}$, being an infinitely many times
differentiable function with compact support. Moreover,
$\varphi_m$ satisfies~(\ref{(6.4)}) for all $m=1,2,\dots$
if the number $C>0$ in its definition is chosen
sufficiently small. This number~$C$ can be chosen
independently of~$m$. (To see this observe that
$(1+|x|^2)^{-p}$ together with all of its derivatives
of order not bigger than~$r$ can be bounded by
$\frac{C(p,r)}{(1+|x|^2)^p}$ with an appropriate
constant~$C(p,r)$.) Hence
$$
E\xi(\varphi_m)^2=\sum\frac1{k!}
\int|\tilde\varphi_m(x_1+\cdots+x_k)|^2
|f_k(x_1,\cdots,x_k)|^2G(dx_1)\dots G(dx_k)\le1
$$
for all $m=1,2,\dots$.
As $\tilde\varphi_m(x)\to C(1+|x|^2)^{-p}$ as $m\to\infty$,
and $\tilde\varphi_m(x)\ge0$, an $m\to\infty$ limiting
procedure in the last relation together with Fatou's
lemma imply that
$$
C\sum\frac1{k!}\int(1+|x_1+\cdots+x_k|^2)^{-p}
|f_k(x_1,\cdots,x_k)|^2G(dx_1)\dots G(dx_k)\le1.
$$
Theorem~$6.1'$ is proved. \hfill$\qed$
\medskip
We shall call the representations given in Theorems~6.1
and~$6.1'$ the canonical representation of a subordinated field.
\index{canonical representation of a subordinated field}
From now on
we restrict ourselves to the case $E\xi_n=0$ or $E\xi(\varphi)=0$
respectively, i.e. to the case when $f_0=0$ in the canonical
representation. If
$$
\xi(\varphi)=\sum_{k=1}^\infty\frac1{k!}\int\tilde\varphi(x_1+\cdots+x_k)
f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k),
$$
then
$$
\xi(\varphi^A_t)=\sum_{k=1}^\infty\frac1{k!}\frac{t^\nu}{A(t)}
\int\tilde\varphi(t(x_1+\cdots+x_k))
f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k)
$$
with the function $\varphi^A_t$ defined in~(\ref{(1.3)}). Define
the spectral measures $G_t$ by the formula $G_t(A)=G(tA)$. Then
we have by Lemma~4.6
$$
\xi(\varphi^A_t)\stackrel{\Delta}{=}\sum_{k=1}^\infty\frac1{k!}
\frac{t^\nu}{A(t)}
\int\tilde\varphi(x_1+\cdots+x_k)
f_k\left(\frac{x_1}t,\dots,\frac{x_k}t\right)
Z_{G_t}(\,dx_1)\dots Z_{G_t}(\,dx_k).
$$
If $G(tB)=t^{2\kappa}G(B)$ with some $\kappa>0$ for all $t>0$ and
$B\in{\cal B}^\nu$, $f_k(\lambda x_1,\dots,\lambda x_k)=
\lambda^{\nu-\kappa k-\alpha}f_k(x_1,\dots,x_k)$, and $A(t)$ is
chosen as $A(t)=t^\alpha$, then Theorem~4.5 (with the choice
$G'(B)=G\left(\frac Bt\right)=t^{-2\kappa}G(B)$) implies that
$\xi(\varphi_t^A)\stackrel{\Delta}{=}\xi(\varphi)$.
Hence we obtain the following
\medskip\noindent
{\bf Theorem 6.2.} {\it Let a generalized random
field $\xi(\varphi)$ be given by the formula
\begin{equation}
\xi(\varphi)=\sum_{k=1}^\infty\frac1{k!}\int \tilde\varphi
(x_1+\cdots+x_k)f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k).
\label{(6.5)}
\end{equation}
If $f_k(\lambda x_1,\dots,\lambda x_k)=\lambda^{\nu-\kappa k-\alpha}
f_k(x_1,\dots,x_k)$ for all~$k$, $(x_1,\dots,x_k)\in R^{k\nu}$ and
$\lambda>0$, $G(\lambda A)=\lambda^{2\kappa}G(A)$ for all
$\lambda>0$ and $A\in{\cal B}^\nu$, then $\xi$ is a self-similar
random field with parameter $\alpha$.}
\medskip
The discrete time version of this result can be proved in the same
way. It states the following
\medskip\noindent
{\bf Theorem 6.2$'$.} {\it If a discrete random field $\xi_n$,
$n\in\textrm{\BBB Z}_\nu$, has the form
\begin{equation}
\xi_n=\sum_{k=1}^\infty \frac1{k!}\int\tilde\chi_n(x_1+\cdots+x_k)
f_k(x_1,\dots,x_k)Z_G(\,dx_1)\dots Z_G(\,dx_k), \quad
n\in\textrm{\BBB Z}_\nu, \label{($6.5'$)}
\end{equation}
and $f_k(\lambda x_1,\dots,\lambda x_k)=\lambda^{\nu-\kappa k-\alpha}
f_k(x_1,\dots,x_k)$ for all~$k$, $G(\lambda A)=\lambda^{2\kappa}G(A)$,
then $\xi_n$ is a self-similar random field with parameter~$\alpha$.}
\medskip
Theorems~6.2 and~$6.2'$ enable us to construct self-similar random
fields. Nevertheless, we have to check whether formulas~(\ref{(6.5)})
and~(\ref{($6.5'$)}) are meaningful. The hard part of this problem is
to check whether
$$
\sum \frac1{k!}\int |\tilde\chi_n(x_1+\cdots+x_k)|^2
|f_k(x_1,\dots,x_k)|^2 G(\,dx_1)\dots G(\,dx_k)<\infty,
$$
or whether
$$
\sum \frac1{k!}\int |\tilde\varphi(x_1+\cdots+x_k)|^2
|f_k(x_1,\dots,x_k)|^2 G(\,dx_1)\dots G(\,dx_k)<\infty
\quad \textrm{for all }\varphi\in{\cal S}.
$$
To investigate when these expressions are finite is a rather hard
problem in the general case. The next result enables us to prove
the finiteness of these expressions in some interesting cases.
Let us define the measure~$G$ by the formula
\begin{equation}
G(A)=\int_A |x|^{2\kappa-\nu}a\left(\frac x{|x|}\right)\,dx,
\quad A\in{\cal B}^\nu,
\label{(6.6)}
\end{equation}
where $a(\cdot)$ is a non-negative, measurable and even
function on the unit sphere $S_{\nu-1}$ of~$R^\nu$,
and $\kappa>0$. (The condition $\kappa>0$ is imposed to
guarantee the relation $G(A)<\infty$ for all bounded sets
$A\in{\cal B}^\nu$.) We prove the following
\medskip\noindent
{\bf Proposition 6.3.} {\it Let the measure $G$ be the
same as in formula~(\ref{(6.6)}).
\medskip
\begin{description}
\item[(a)] If the function $a(\cdot)$ is bounded on the
unit sphere $S_{\nu-1}$, and $\frac \nu k>2\kappa>0$, then
$$
D(n)=\int|\tilde\chi_n(x_1+\cdots+x_k)|^2
G(\,dx_1)\dots G(\,dx_k)<\infty
\quad \textrm{for all \ } n\in\textrm{\BBB Z}_\nu,
$$
and
\begin{eqnarray*}
D(\varphi)&&=\int|\tilde\varphi(x_1+\cdots+x_k)|^2
G(\,dx_1)\dots G(\,dx_k)\\
&&\le C\int(1+|x_1+\cdots+x_k|^2)^{-p}
G(\,dx_1)\dots G(\,dx_k)<\infty
\end{eqnarray*}
for all $\varphi\in{\cal S}$ and $p>\frac\nu2$ with some
$C=C(\varphi,p)<\infty$.
\item[(b)] If there is a constant $C>0$ such that $a(x)>C$ in a
neighbourhood of a point $x_0\in S_{\nu-1}$, and either
$2\kappa\le0$ or $2\kappa\ge\frac\nu k$, then the integrals $D(n)$
are divergent, and the same relation holds for $D(\varphi)$
with some $\varphi\in{\cal S}$.
\end{description}
}
\medskip\noindent
{\it Proof of Proposition 6.3.}\/ {\it Proof of Part (a)}
\noindent
We may assume that $a(x)=1$ for all $x\in S_{\nu-1}$. Define
$$
J_{\kappa,k}(x)=\int_{x_1+\cdots+x_k=x}
|x_1|^{2\kappa-\nu}\cdots|x_k|^{2\kappa-\nu}
\,dx_1\dots\,dx_k, \quad x\in R^{\nu},
$$
for $k\ge2$, where $\,dx_1\dots\,dx_k$ denotes the Lebesgue
measure on the hyperplane $x_1+\cdots+x_k=x$, and let
$J_{\kappa,1}(x)=|x|^{2\kappa-\nu}$. We have
$$
J_{\kappa,k}(\lambda x)
=|\lambda|^{k(2\kappa-\nu)+(k-1)\nu}J_{\kappa,k}(x)
=|\lambda|^{2\kappa k-\nu}J_{\kappa,k}(x),
\quad x\in R^\nu,\;\; \lambda>0,
$$
because of the homogeneity of the integral. We can write, because
of (\ref{(6.6)}) with $a(x)\equiv1$
\begin{equation}
D(n)=\int_{R^\nu}|\tilde\chi_n(x)|^2 J_{\kappa,k}(x)\,dx,
\label{(6.7)}
\end{equation}
and
$$
D(\varphi)=\int_{R^\nu}|\tilde\varphi(x)|^2J_{\kappa,k}(x)\,dx.
$$
%&&\int (1+|x_1+\cdots+x_k|^2)^{-p}G(\,dx_1)\dots G(\,dx_k)
%=\int (1+|x|^2)^{-p} J_{\kappa,k}(x)\,dx. \nonumber
%\end{eqnarray}
We prove by induction on $k$ that
\begin{equation}
J_{\kappa,k}(x)\le C(\kappa,k)|x|^{2\kappa k-\nu} \label{(6.8)}
\end{equation}
with an appropriate constant $C(\kappa,k)<\infty$ if
$\frac\nu k>2\kappa>0$.
We have
$$
J_{\kappa,k}(x)=\int J_{\kappa,k-1}(y)|x-y|^{2\kappa-\nu}\,dy.
$$
Hence
\begin{eqnarray*}
J_{\kappa,k}(x)&\le& C(\kappa,k-1)\int
|y|^{2\kappa(k-1)-\nu}|x-y|^{2\kappa-\nu}\,dy\\
&=& C(\kappa,k-1)|x|^{2\kappa k-\nu}
\int |y|^{2\kappa(k-1)-\nu}\left|\frac x{|x|}-y\right|^{2\kappa-\nu}\,dy
=C(\kappa,k)|x|^{2\kappa k-\nu},
\end{eqnarray*}
since $\int |y|^{2\kappa(k-1)-\nu}
\left|\frac x{|x|}-y\right|^{2\kappa-\nu}\,dy<\infty$.
The last integral is finite, since its integrand behaves at zero
asymptotically as $C_1|y|^{2\kappa(k-1)-\nu}$, at the point
$e=\frac x{|x|}\in S_{\nu-1}$ as $C_2|y-e|^{2\kappa-\nu}$, and at
infinity as $C_3|y|^{2\kappa k-2\nu}$. Relations~(\ref{(6.7)})
and~(\ref{(6.8)}) imply that
\begin{eqnarray*}
D(n)&\le& C'\int |\tilde\chi_0(x)|^2|x|^{2\kappa k-\nu}\,dx
\le C''\int |x|^{2\kappa k-\nu}\prod_{l=1}^\nu\frac1{1+|x^{(l)}|^2}\,dx\\
&\le& C'''\int_{|x^{(1)}|=\max\limits_{1\le l\le\nu}|x^{(l)}|}
|x^{(1)}|^{2\kappa k-\nu}\prod_{l=1}^\nu\frac1{1+|x^{(l)}|^2}\,dx\\
&=&\sum_{p=0}^\infty C'''
\int_{|x^{(1)}|=\max\limits_{1\le l\le\nu}|x^{(l)}|,
\;2^p\le |x^{(1)}|<2^{p+1}}
+C'''\int_{|x^{(1)}|=\max\limits_{1\le l\le\nu}|x^{(l)}|,\; |x^{(1)}|<1},
\end{eqnarray*}
where the integrals in the last line are taken with the same
integrand as in the previous one.
The second term in the last sum can be simply bounded by a constant,
since $B=\left\{x\colon\;|x^{(1)}|=\max\limits_{1\le l\le\nu}|x^{(l)}|, \;
|x^{(1)}|<1\right\}\subset\{x\colon\;|x|\le\sqrt \nu\}$,
and $|x^{(1)}|^{2\kappa k-\nu}\prod\limits_{l=1}^\nu\frac1{1+|x^{(l)}|^2}
\le\textrm{const.}\, |x|^{2\kappa k-\nu}$ on the set~$B$. Hence
$$
D(n)\le C_1\sum_{p=0}^\infty 2^{p(2\kappa k-\nu)}
\left[\int_{-\infty}^\infty\frac 1{1+x^2}\,dx\right]^\nu+C_2<\infty.
$$
We have $|\tilde\varphi(x)|\le C(1+|x|^2)^{-p}$ with some
$C=C(\varphi,p)>0$ for every $p>0$ if $\varphi\in{\cal S}$.
The proof of the estimate $D(\varphi)<\infty$ for $\varphi\in{\cal S}$
is similar but simpler.
\medskip\noindent
{\it Proof of Part (b).}\/ Define, similarly to the function
$J_{\kappa,k}$ the function
$$
J_{\kappa,k,a}(x)=\int_{x_1+\cdots+x_k=x}
|x_1|^{2\kappa-\nu}a\left(\frac{x_1}{|x_1|}\right)
\cdots|x_k|^{2\kappa-\nu} a\left(\frac{x_k}{|x_k|}\right)
\,dx_1\dots\,dx_k, \quad x\in R^{\nu},
$$
where $\,dx_1\dots\,dx_k$ denotes the Lebesgue measure on the
hyperplane $x_1+\cdots+x_k=x$. Since
$$
J_{\kappa,k,a}(x)\ge\int_{y\colon\; |y|<(\frac12+\varepsilon)|x|,\;
|y-x|<(\frac12+\varepsilon)|x|} J_{\kappa,k-1,a}(y)
a\left(\frac{x-y}{|x-y|}\right)|x-y|^{2\kappa-\nu}\,dy
$$
with an arbitrary $\varepsilon>0$, an argument similar to the one
in Part~(a) shows that
$$
J_{\kappa,k,a}(x)\left\{
\begin{array}{l}
\ge \bar C(\kappa,k)|x|^{2\kappa k-\nu}
\quad\textrm{if }\frac\nu k>2\kappa>0,\\
=\infty\qquad \textrm{if }\kappa\le0
\textrm{ or } 2\kappa\ge\frac\nu k
\end{array}\right.
$$
if $\frac x{|x|}$ is close to a point~$x_0\in S_{\nu-1}$ in a
small neighbourhood of which the function $a(\cdot)$ is separated
from zero. Since $|\tilde\chi_n(x)|^2>0$ for almost all $x\in R^\nu$,
$$
D(n)=\int|\tilde \chi_n(x)|^2J_{\kappa,k,a}(x)\,dx=\infty
$$
under the conditions of Part~(b). Similarly $D(\varphi)=\infty$
if $|\tilde\varphi(x)|^2>0$ for almost all $x\in R^\nu$. We remark
that the conditions in Part~(b) can be weakened. It would have been
enough to assume that $a(x)>0$ on a set of positive Lebesgue
measure in~$S_{\nu-1}$. \hfill$\qed$
\medskip
Theorems~6.2 and~$6.2'$ together with Proposition~6.3 have the
following
\medskip\noindent
{\bf Corollary 6.4.} {\it The formulae
\begin{eqnarray*}
\xi_n=\sum_{k=1}^M \int\tilde\chi_n(x_1+\cdots+x_k)
&&\prod_{l=1}^k \left(|x_l|^{-\kappa+(\nu-\alpha)/k}
\cdot b_k\left(\frac{x_l}{|x_l|}\right)\right) \\
&&\qquad Z_G(\,dx_1)\dots Z_G(\,dx_k),
\qquad n\in\textrm{\BBB Z}_\nu,
\end{eqnarray*}
and
\begin{eqnarray*}
\xi(\varphi)=\sum_{k=1}^M \int\tilde\varphi(x_1+\cdots+x_k)
&&\prod_{l=1}^k\left(|x_l|^{-\kappa+(\nu-\alpha)/k}\cdot
b_k\left(\frac{x_l}{|x_l|}\right)\right) \\
&&\qquad Z_G(\,dx_1)\dots Z_G(\,dx_k),
\qquad \varphi\in{\cal S},
\end{eqnarray*}
define self-similar random fields with self-similarity
parameter~$\alpha$ if $G$ is defined by formula~(\ref{(6.6)}),
the parameter~$\alpha$ satisfies the inequality
$\frac\nu2<\alpha<\nu$, and the functions $a(\cdot)$ (in
the definition of the measure~$G(\cdot)$ in~(\ref{(6.6)})),
$b_1(\cdot)$,\dots, $b_M(\cdot)$ are bounded even
functions on $S_{\nu-1}$.}
\medskip
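The homogeneity condition of Theorem~6.2 can be checked for the
kernel functions of Corollary~6.4 by a direct calculation. Writing
$f_k(x_1,\dots,x_k)=\prod\limits_{l=1}^k|x_l|^{-\kappa+(\nu-\alpha)/k}
b_k\left(\frac{x_l}{|x_l|}\right)$ we get, since
$\frac{\lambda x_l}{|\lambda x_l|}=\frac{x_l}{|x_l|}$ for
$\lambda>0$,
$$
f_k(\lambda x_1,\dots,\lambda x_k)
=\lambda^{k\left(-\kappa+(\nu-\alpha)/k\right)}f_k(x_1,\dots,x_k)
=\lambda^{\nu-\kappa k-\alpha}f_k(x_1,\dots,x_k),
$$
while the measure~$G$ defined in~(\ref{(6.6)}) satisfies
$G(\lambda A)=\lambda^{2\kappa}G(A)$.
\medskip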
The following observation may be useful when we want to
prove Corollary~6.4. We can replace $\xi_n$ by another
random field with the same distribution. Thus we can
write, by exploiting Theorem~4.5,
$$
\xi_n=\sum_{k=1}^M\int\tilde\chi_n(x_1+\cdots+x_k)
Z_{G'}(\,dx_1)\dots Z_{G'}(\,dx_k),
\quad n\in\textrm{\BBB Z}_\nu,
$$
with a random spectral measure $Z_{G'}$ corresponding to
the spectral measure
$G'(\,dx)=b_k(\frac x{|x|})^2|x|^{-2\kappa+2(\nu-\alpha)/k}G(\,dx)
=a(\frac x{|x|})b_k(\frac x{|x|})^2|x|^{-\nu+2(\nu-\alpha)/k}\,dx$,
which depends on the index~$k$.
In the case of generalized random fields a similar
argument can be applied.
\medskip\noindent
{\it Remark 6.5.} The estimate on $J_{\kappa,k}$ and the end of
the proof of Part~(a) in Proposition~6.3 show that the self-similar
random field
\begin{eqnarray*}
\xi(\varphi)&&=\sum_{k=1}^M\int \tilde\varphi(x_1+\cdots+x_k)
|x_1+\cdots+x_k|^p \, u
\left(\frac{x_1+\cdots+x_k}{|x_1+\cdots+x_k|}\right)\\
&&\qquad \prod_{l=1}^k\left(|x_l|^{-\kappa+(\nu-\alpha)/k}\cdot
b_k\left(\frac{x_l}{|x_l|}\right)\right)
Z_G(\,dx_1)\dots Z_G(\,dx_k),
\quad \varphi\in{\cal S},
\end{eqnarray*}
and
\begin{eqnarray*}
\xi_n&&=\sum_{k=1}^M\int \tilde\chi_n(x_1+\cdots+x_k)
|x_1+\cdots+x_k|^p \, u
\left(\frac{x_1+\cdots+x_k}{|x_1+\cdots+x_k|}\right)\\
&&\qquad \prod_{l=1}^k\left(|x_l|^{-\kappa+(\nu-\alpha)/k}\cdot
b_k\left(\frac{x_l}{|x_l|}\right)\right)
Z_G(\,dx_1)\dots Z_G(\,dx_k),
\quad n\in\textrm{\BBB Z}_\nu,
\end{eqnarray*}
are well defined if $G$ is defined by formula~(\ref{(6.6)}),
$a(\cdot)$, $b_k(\cdot)$, $1\le k\le M$, and $u(\cdot)$ are bounded even
functions on $S_{\nu-1}$, $\frac\nu2<\alpha<\nu$, and
$\alpha-p<\nu$ in the generalized and
$\frac{\nu-1}2<\alpha-p<\nu$ in the discrete random field
case. The self-similarity parameter of these random fields
is $\alpha-p$. We remark that in the case $p>0$ this
class of self-similar fields also contains self-similar
fields with self-similarity parameter less than~$\frac\nu2$.
\medskip
In proving the statement of Remark~6.5 we have to check the
integrability conditions needed for the existence of the
Wiener--It\^o integrals $\xi(\varphi)$ and $\xi_n$. To check
them it is worth remarking that in the proof of Part~(a) of
Proposition~6.3 we proved the estimate $J_{\bar\kappa,k}(x)\le
C(\bar\kappa,k)|x|^{2\bar\kappa k-\nu}$. We want to apply this
inequality in the present case with the choice
$\bar\kappa=\frac{\nu-\alpha}k$. Then arguing similarly to
the proof of Part~(a) of Proposition~6.3 we arrive at the
question whether the relations
$\int|\tilde\chi_n(x)|^2|x|^{2p+2(\nu-\alpha)-\nu}\,dx<\infty$
and
$\int|\tilde\varphi(x)|^2|x|^{2p+2(\nu-\alpha)-\nu}\,dx<\infty$,
$\varphi\in{\cal S}$, hold under the conditions of
Remark~6.5. They can be proved by means of the argument
applied at the end of the proof of Part~(a) of Proposition~6.3.
\medskip
The following question arises in a natural way. When do different
formulas satisfying the conditions of Theorem~6.2 or Theorem~$6.2'$
define self-similar random fields with different distributions?
In particular: Are the self-similar random fields constructed via multiple
Wiener--It\^o integrals necessarily non-Gaussian? We cannot give a
completely satisfactory answer for the above question, but our
former results yield some useful information. Let us substitute the
spectral measure $G$ by $G'$ such that
$\frac{G(\,dx)}{G'(\,dx)}=|g(x)|^2$, $g(-x)=\overline{g(x)}$, and the
functions $|x_l|^{-\kappa+(\nu-\alpha)/k}b_k(\frac{x_l}{|x_l|})$ by
$b_k(\frac{x_l}{|x_l|})g(x_l)|x_l|^{-\kappa+(\nu-\alpha)/k}$ in
Corollary~6.4. By Theorem~4.4 the new field has the same distribution
as the original one. On the other hand, Corollary~5.4 helps us to
decide whether two random variables have different moments, and
therefore different distributions. Let us consider e.g. a moment of
odd order of the random variables $\xi_n$ or $\xi(\varphi)$
defined in Corollary~6.4. It is clear that all $h_\gamma\ge0$.
Moreover, if $b_k(x)$ does not vanish for some even number~$k$, then
there exists an $h_\gamma>0$ in the sum expressing an odd moment of
$\xi_n$ or $\xi(\varphi)$. Hence the odd moments of $\xi_n$ or
$\xi(\varphi)$ are positive in this case. This means in particular
that the self-similar random fields defined in Corollary~6.4 are
non-Gaussian if $b_k$ is non-vanishing for some even~$k$. The next
result shows that the tail behaviour of multiple Wiener--It\^o
integrals of different order is different.
\medskip\noindent
{\bf Theorem 6.6.} {\it Let $G$ be a spectral measure and $Z_G$ a
random spectral measure corresponding to~$G$. For all $h\in{\cal H}_G^m$
there exist some constants $K_1>K_2>0$ and $x_0>0$ depending on
the function~$h$ such that
$$
e^{-K_1x^{2/m}}\le P(|I_G(h)|>x)\le e^{-K_2x^{2/m}}
$$
for all $x>x_0$.}
\medskip\noindent
{\it Remark.}\/ As the proof of Theorem~6.6 shows the constant~$K_2$
in the upper bound of the above estimate can be chosen as
$K_2=C_m (EI_G(h)^2)^{-1/m}$ with a constant~$C_m$ depending only on
the order~$m$ of the Wiener--It\^o integral~$I_G(h)$. This means
that for a fixed number~$m$ the constant~$K_2$ in the above estimate
can be chosen as a constant depending only on the variance of the
random variable~$I_G(h)$. On the other hand, no simple
characterization of the constant~$K_1>0$ appearing in the lower
bound of this estimate is known.
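Before turning to the proof, the shape of both bounds can be illustrated numerically in the simplest nontrivial case $m=2$, taking $I_G(h)=H_2(\xi)=\xi^2-1$ with a one-dimensional standard normal~$\xi$ (an illustrative sketch, not part of the proof; the function names are ad hoc). For $x\ge1$ the event $|\xi^2-1|>x$ reduces to $\xi^2>x+1$, whose probability is $\mathrm{erfc}\bigl(\sqrt{(x+1)/2}\bigr)$, so $-\log P$ should grow like a constant multiple of $x^{2/m}=x$:

```python
import math

def tail_prob_H2(x):
    """Exact tail P(|H_2(xi)| > x) for a standard normal xi, H_2(u) = u^2 - 1.

    For x >= 1 the event |xi^2 - 1| > x reduces to xi^2 > x + 1, and
    P(|xi| > t) = erfc(t / sqrt(2)), so the probability equals
    erfc(sqrt((x + 1) / 2)).
    """
    assert x >= 1
    return math.erfc(math.sqrt((x + 1.0) / 2.0))

# Theorem 6.6 with m = 2 predicts exp(-K_1 x) <= P <= exp(-K_2 x),
# i.e. -log P(...) / x should stay between two positive constants.
for x in [5.0, 10.0, 20.0, 50.0]:
    rate = -math.log(tail_prob_H2(x)) / x
    print(f"x = {x:5.1f}   -log P / x = {rate:.3f}")
```

The printed ratios decrease slowly towards $\frac12$, in agreement with the Gaussian tail asymptotics $P(\xi^2>x+1)\approx e^{-(x+1)/2}$.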
\medskip\noindent
{\it Proof of Theorem 6.6.} {\it (a) Proof of the upper estimate.}
We have
$$
P(|I_G(h)|>x)\le x^{-2N}E|I_G(h)|^{2N}.
$$
By Corollary~5.6
$$
E|I_G(h)|^{2N}\le \bar C(m,N)[E(I_G(h)^2)]^N\le \bar C(m,N) C_1^N,
$$
and by a simple combinatorial argument we obtain that
$$
\bar C(m,N)\le\frac{(2Nm-1)(2Nm-3)\cdots 1}{(m!\,)^N},
$$
since the numerator on the right-hand side of this inequality
equals the number of complete diagrams
$|\bar\Gamma(\underbrace {m,\dots,m}_{2N \textrm{ times }})|$
if vertices from the same row can also be connected. Multiplying the
inequalities
$$
(2Nm-2j-1)(2Nm-2j-1-2N)\cdots (2Nm-2j-1-2N(m-1))\le (2N)^mm!,
$$
$j=0,\dots,N-1$, we obtain that
$$
\bar C(m,N)\le (2N)^{mN}.
$$
(This inequality could be sharpened, but it is sufficient for our
purpose.) Choose a sufficiently small number $\alpha>0$, and define
$N=[\alpha x^{2/m}]$, where $[\cdot]$ denotes integer part. With
this choice we have
$$
P(|I_G(h)|>x)\le(x^{-2}(2\alpha)^mx^2)^N C_1^N=[C_1(2\alpha)^m]^N
\le e^{-K_2x^{2/m}},
$$
if $\alpha$ is chosen in such a way that $C_1(2\alpha)^m\le\frac1e$,
$K_2=\frac\alpha2$, and $x>x_0$ with an appropriate $x_0>0$.
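The combinatorial bound $\bar C(m,N)\le(2N)^{mN}$ used above is easy to verify numerically for small parameters with exact integer arithmetic (an illustrative sketch; the function names are ad hoc):

```python
from fractions import Fraction
import math

def complete_diagram_count(m, N):
    """(2Nm - 1)(2Nm - 3) ... 1: the number of complete diagrams with
    2N rows of length m when vertices of the same row may also be
    connected, i.e. the double factorial (2Nm - 1)!!."""
    return math.prod(range(2 * N * m - 1, 0, -2))

def C_bar_bound(m, N):
    """The exact value of (2Nm - 1)!! / (m!)^N, which bounds bar-C(m, N)."""
    return Fraction(complete_diagram_count(m, N), math.factorial(m) ** N)

# the elementary estimate used in the text: bar-C(m, N) <= (2N)^(mN)
for m in range(1, 7):
    for N in range(1, 7):
        assert C_bar_bound(m, N) <= (2 * N) ** (m * N)
```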
\medskip\noindent
{\it (b) Proof of the lower estimate.}
First we reduce this inequality to the following statement. Let
$Q(x_1,\dots,x_k)$ be a homogeneous polynomial of order~$m$ (the
number~$k$ is arbitrary), and $\xi=(\xi_1,\dots,\xi_k)$ a
$k$-dimensional standard normal variable. Then
\begin{equation}
P(Q(\xi_1,\dots,\xi_k)>x)\ge e^{-Kx^{2/m}} \label{(6.9)}
\end{equation}
if $x>x_0$, where the constants $K>0$ and $x_0>0$ may depend on the
polynomial~$Q$.
By the results of Chapter~4, $I_G(h)$ can be written in the form
\begin{equation}
I_G(h)=\sum_{j_1+\cdots+j_l=m} C^{k_1,\dots,k_l}_{j_1,\dots,j_l}
H_{j_1}(\xi_{k_1})\cdots H_{j_l}(\xi_{k_l}),
\label{(6.10)}
\end{equation}
where $\xi_1,\xi_2,\dots$ are independent standard normal
random variables, $C^{k_1,\dots,k_l}_{j_1,\dots,j_l}$ are
appropriate coefficients, and the right-hand side of~(\ref{(6.10)})
is convergent in~$L_2$ sense. Let us fix a sufficiently large
integer~$k$, and let us consider the conditional distribution
of the right-hand side of~(\ref{(6.10)}) under the condition
$\xi_{k+1}=x_{k+1},\xi_{k+2}=x_{k+2},\dots$, where the numbers
$x_{k+1},x_{k+2},\dots$ are arbitrary. This conditional
distribution coincides with the distribution of the random variable
$Q(\xi_1,\dots,\xi_k,x_{k+1},x_{k+2},\dots)$ with probability~1,
where the polynomial $Q$ is obtained by substituting
$\xi_{k+1}=x_{k+1}$, $\xi_{k+2}=x_{k=2},\dots$ into the
right-hand side of~(\ref{(6.10)}). It is clear that all these
polynomials $Q(\xi_1,\dots,\xi_k,x_{k+1},x_{k+2},\dots)$ are
of order $m$ if $k$ is sufficiently large. It is sufficient
to prove that
$$
P(|Q(\xi_1,\dots,\xi_k,x_{k+1},x_{k+2},\dots)|>x)\ge e^{-Kx^{2/m}}
$$
for $x>x_0$, where the constants $K>0$ and $x_0>0$ may depend on
the polynomial~$Q$. Write
$$
Q(\xi_1,\dots,\xi_k,x_{k+1},x_{k+2},\dots)=
Q_1(\xi_1,\dots,\xi_k)+Q_2(\xi_1,\dots,\xi_k)
$$
where $Q_1$ is a homogeneous polynomial of order~$m$, and~$Q_2$ is a
polynomial of order less than~$m$. The polynomial $Q_2$ can be
rewritten as the sum of finitely many Wiener--It\^o integrals with
multiplicity less than~$m$. Hence the already proved part of
Theorem~6.6 implies that
$$
P(Q_2(\xi_1,\dots,\xi_k)>x)\le e^{-\bar Kx^{2/(m-1)}}
\quad\textrm{with some } \bar K>0.
$$
(We may assume that $m\ge2$.) Then an application of
relation~(\ref{(6.9)})
to~$Q_1$ implies the remaining part of Theorem~6.6, thus it
suffices to prove~(\ref{(6.9)}).
\medskip
If $Q(\cdot)$ is a polynomial of $k$ variables, then there exist
some $\alpha>0$ and $\beta>0$ such that
$$
\lambda\left(\left|Q\left(\frac{x_1}{|x|},\dots,
\frac{x_k}{|x|}\right)\right|>\alpha\right)>\beta,
$$
where $|x|^2=\sum\limits_{j=1}^kx_j^2$, and $\lambda$ denotes the
Lebesgue measure on the unit sphere $S_{k-1}$ of~$R^k$.
Exploiting that $|\xi|$ and $\frac\xi{|\xi|}$ are independent,
$\frac\xi{|\xi|}$ is uniformly distributed on the unit sphere
$S_{k-1}$, and $P(|\xi|>x)\ge ce^{-x^2}$ for a $k$-dimensional
standard normal random variable, we obtain that
$$
P(|Q(\xi_1,\dots,\xi_k)|>x)\ge\beta P\left(|\xi|^m>
\frac x\alpha\right)>e^{-Kx^{2/m}},
$$
if the constant $K$ is sufficiently large and $x>x_0$ with a
sufficiently large~$x_0$. Theorem~6.6 is
proved. \hfill$\qed$
\medskip
Theorem~6.6 implies in particular that Wiener--It\^o integrals of
different multiplicity have different distributions. A bounded
random variable measurable with respect to the $\sigma$-algebra
generated by a stationary Gaussian field can be expressed as a sum of
multiple Wiener--It\^o integrals. Another consequence of Theorem~6.6
is the fact that the number of terms in this sum must be infinite.
In Theorems~6.2 and~$6.2'$ we have defined a large class of
self-similar fields. The question arises whether this class contains
self-similar fields such that the distributions of their random
variables tend to one (or zero) at infinity (at minus infinity)
much faster than the normal distribution functions do. This
question is still open. By Theorem~6.6 such fields, if
any, must be expressed as a sum of infinitely many Wiener--It\^o
integrals. The above question is of much greater importance
than it may seem at first sight. Some considerations suggest
that in some important models of statistical physics self-similar
fields with very fast decreasing tail distributions appear as
limit, when the so-called renormalization group transformations are
applied for the probability measure describing the state of the
model at critical temperature. (The renormalization group
transformations are the transformations over the distribution of
stationary fields induced by formula~(\ref{(1.1)}) or~(\ref{(1.3)}),
when $A_N=N^\alpha$, $A(t)=t^\alpha$ with some~$\alpha$.) No
rigorous proof about the existence of such self-similar fields is
known yet. Thus the real problem behind the above question is whether
the self-similar fields interesting for statistical physics can be
constructed via multiple Wiener--It\^o integrals.
\chapter{On the original Wiener--It\^o integral}
In this chapter the definition of the original Wiener--It\^o
integral introduced by It\^o in~\cite{r18} is explained. As the
arguments are very similar to those of Chapters~4 and~5 (only the
notations become simpler) most proofs will be omitted.
Let a measure space $(M,{\cal M},\mu)$ with a $\sigma$-finite
measure~$\mu$ be given. Let $\mu$ satisfy the following continuity
property: For all $\varepsilon>0$ and $A\in{\cal M}$,
$\mu(A)<\infty$, there exist some disjoint sets $B_j\in{\cal M}$,
$j=1,\dots,N$, with some integer~$N$ such that
$\mu(B_j)<\varepsilon$ for all $1\le j\le N$, and
$A=\bigcup\limits_{j=1}^NB_j$. We introduce the following definition.
\medskip\noindent
{\bf Definition of (Gaussian) random orthogonal measures.}
\index{Gaussian random orthogonal measure}
{\it A system of random variables $Z_\mu(A)$, $A\in{\cal M}$,
$\mu(A)<\infty$, is called a Gaussian random orthogonal measure
corresponding to the measure~$\mu$ if
\medskip
\begin{description}
\item[(i)] $Z_\mu(A_1),\dots,Z_\mu(A_k)$ are independent Gaussian
random variables if the sets $A_j\in{\cal M}$, $\mu(A_j)<\infty$,
$j=1,\dots,k$, are disjoint.
\item[(ii)] $EZ_\mu(A)=0$, $EZ_\mu(A)^2=\mu(A)$.
\item[(iii)] $Z_\mu\left(\bigcup\limits_{j=1}^k A_j\right)
=\sum\limits_{j=1}^k Z_\mu(A_j)$
with probability~1 if $A_1,\dots,A_k$ are disjoint sets.
\end{description}
}
\medskip\noindent
{\it Remark.}\/ There is the following equivalent version of the
definition of random orthogonal measures: The system of random
variables $Z_\mu(A)$, $A\in{\cal M}$, $\mu(A)<\infty$, is
a Gaussian random orthogonal measure corresponding to the
measure~$\mu$ if
\medskip
\begin{description}
\item[(i$'$)] $Z_\mu(A_1),\dots,Z_\mu(A_k)$ are (jointly) Gaussian
random variables for all sets $A_j\in{\cal M}$, $\mu(A_j)<\infty$,
$j=1,\dots,k$.
\item[(ii$'$)] $EZ_\mu(A)=0$, and $EZ_\mu(A)Z_\mu(B)=\mu(A\cap B)$
if $A,\,B\in{\cal M}$, $\mu(A)<\infty$, $\mu(B)<\infty$.
\end{description}
\medskip
It is not difficult to see that properties~(i),~(ii) and~(iii)
imply relations~(i$'$) and~(ii$'$). On the other hand, it is
clear that (i$'$) and (ii$'$) imply~(i) and~(ii). To see that
they also imply relation~(iii) observe that under these
conditions
$$
E\left[Z_\mu\left(\bigcup\limits_{j=1}^k A_j\right)
-\sum\limits_{j=1}^k Z_\mu(A_j)\right]^2=0
$$
if $A_1,\dots,A_k$ are disjoint sets.
The second characterization of random orthogonal measures may
help to show that for any measure space $(M,{\cal M},\mu)$
with a $\sigma$-finite measure~$\mu$ there exists a Gaussian
random orthogonal measure corresponding to the measure~$\mu$.
The main point in checking this statement is the proof that
for any sets $A_1,\dots,A_k\in{\cal M}$, $\mu(A_j)<\infty$,
$1\le j\le k$, there exists a Gaussian random vector
$(Z_\mu(A_1),\dots,Z_\mu(A_k))$, $EZ_\mu(A_j)=0$, with
correlation $EZ_\mu(A_i)Z_\mu(A_j)=\mu(A_i\cap A_j)$ for
all $1\le i,j\le k$. To prove this we have to show that
the corresponding covariance matrix is positive semidefinite,
i.e. $\sum\limits_{i,j} c_i\bar c_j\mu(A_i\cap A_j)\ge0$ for
an arbitrary vector $(c_1,\dots,c_k)$. But this follows from
the observation $\sum\limits_{i,j} c_i\bar c_j\chi_{A_i\cap A_j}(x)
=\sum\limits_{i,j} c_i\bar c_j\chi_{A_i}(x)\overline{\chi_{ A_j}(x)}
=\left|\sum\limits_i c_i\chi_{A_i}(x)\right|^2\ge0$ for all
$x\in M$, if we integrate this inequality with respect to
the measure~$\mu$ in the space~$M$.
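The nonnegativity of the quadratic form $\sum c_i\bar c_j\mu(A_i\cap A_j)$ can also be observed numerically in a toy example, taking for~$\mu$ the Lebesgue measure on the real line, for the sets~$A_i$ random intervals, and real coefficients~$c_i$ (an illustrative sketch; all names are ad hoc):

```python
import random

def intersection_length(a, b):
    """Lebesgue measure of the intersection of the intervals a and b,
    each given as a pair (left endpoint, right endpoint)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

random.seed(1)
intervals = [tuple(sorted(random.uniform(0.0, 1.0) for _ in range(2)))
             for _ in range(6)]
gram = [[intersection_length(A, B) for B in intervals] for A in intervals]

# The quadratic form equals the mu-integral of |sum c_i chi_{A_i}|^2,
# hence it is non-negative for every real vector c.
for _ in range(1000):
    c = [random.uniform(-1.0, 1.0) for _ in range(6)]
    q = sum(c[i] * c[j] * gram[i][j] for i in range(6) for j in range(6))
    assert q >= -1e-12
print("the matrix (mu(A_i cap A_j)) is positive semidefinite")
```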
\medskip
We define the real Hilbert spaces $\bar{{\cal K}}^n_\mu$, $n=1,2,\dots$.
The space $\bar{{\cal K}}^n_\mu$ consists of the real-valued measurable
functions over
$(\underbrace{M\times\cdots\times M}_{n\textrm{ times}},\,
\underbrace{{\cal M}\times\cdots\times{\cal M}}_{n\textrm{ times}})$
such that
$$
\|f\|^2=\int|f(x_1,\dots,x_n)|^2\mu(\,dx_1)\dots\mu(\,dx_n)<\infty,
$$
and the last formula defines the norm in $\bar{{\cal K}}^n_\mu$.
Let ${\cal K}^n_\mu$ denote the subspace of $\bar{{\cal K}}^n_\mu$
consisting of the functions $f\in\bar{{\cal K}}^n_\mu$ such that
$$
f(x_1,\dots,x_n)=f(x_{\pi(1)},\dots,x_{\pi(n)})
\quad\textrm{for all }\pi\in\Pi_n.
$$
Let the spaces $\bar{{\cal K}}^0_\mu$ and ${\cal K}^0_\mu$ consist
of the real constants with the norm $\|c\|=|c|$. Finally we
define the Fock space $\textrm{Exp}\,{\cal K}_\mu$ which consists
of the sequences $f=(f_0,f_1,\dots)$, $f_n\in{\cal K}^n_\mu$,
$n=0,1,2,\dots$, such that
$$
\|f\|^2=\sum_{n=0}^\infty \frac1{n!} \|f_n\|^2<\infty.
$$
Given a random orthogonal measure $Z_\mu$ corresponding
to $\mu$, let us introduce the $\sigma$-algebra
${\cal F}=\sigma(Z_\mu(A)\colon\; A\in{\cal M},\,\mu(A)<\infty)$.
Let ${\cal K}$ denote the real Hilbert space of square
integrable random variables measurable with respect to
the $\sigma$-algebra~${\cal F}$. Let ${\cal K}_{\le n}$
denote the subspace that is the closure of the linear space
containing the polynomials of the random variables $Z_\mu(A)$
of order less than or equal to~$n$. Let ${\cal K}_n$ be the
orthogonal complement of ${\cal K}_{\le n-1}$ in
${\cal K}_{\le n}$. (The norm is defined as $\|\xi\|^2=E\xi^2$
in these Hilbert spaces.)
The multiple Wiener--It\^o integrals with respect to the random
orthogonal measure $Z_\mu$,
\index{Wiener--It\^o integrals with respect to a random orthogonal
measure}
to be defined below, give a unitary transformation from
$\textrm{Exp}\,{\cal K}_\mu$ to ${\cal K}$. We shall denote
these integrals by $\int'$ to distinguish them from the
Wiener--It\^o integrals defined in Chapter~4.
First we define the class of simple functions
$\hat{\bar{{\cal K}}}_\mu^n\subset\bar{{\cal K}}_\mu^n$.
A function $f\in\bar{{\cal K}}_\mu^n$ is in
$\hat{\bar{{\cal K}}}_\mu^n$ if there exists a finite
system of disjoint sets $\Delta_1,\dots,\Delta_N$,
with $\Delta_j\in{\cal M}$, $\mu(\Delta_j)<\infty$, \
$j=1,\dots,N$, such that $f(x_1,\dots,x_n)$ is constant
on the sets $\Delta_{j_1}\times\cdots\times\Delta_{j_n}$
if the indices $j_1,\dots,j_n$ are all different, and
$f(x_1,\dots,x_n)$ equals zero outside these sets. We define
\index{simple function}
$$
\int' f(x_1,\dots,x_n)Z_\mu(\,dx_1)\dots Z_\mu(\,dx_n)
=\sum f(x_{j_1},\dots,x_{j_n})Z_\mu(\Delta_{j_1})
\cdots Z_\mu(\Delta_{j_n})
$$
for $f\in\hat{\bar{{\cal K}}}_\mu^n$, where $x_k\in\Delta_k$,
$k=1,\dots,N$, and the summation goes over all $n$-tuples
$(j_1,\dots,j_n)$ of different indices, $1\le j_s\le N$.
Let $\hat{{\cal K}}_\mu^n=\hat{\bar{{\cal K}}}_\mu^n\cap{\cal K}_\mu^n$.
The random variables
$$
I'_\mu(f)=\frac1{n!}\int'f(x_1,\dots,x_n)
Z_\mu(\,dx_1)\dots Z_\mu(\,dx_n),
\quad f\in\hat{\bar{{\cal K}}}_\mu^n,
$$
have zero expectation, integrals of different order are orthogonal,
$$
I'_\mu(f)=I'_\mu(\,\textrm{Sym}\, f), \quad\textrm{and }
\textrm{Sym}\,f\in \hat{{\cal K}}_\mu^n \textrm{ if }
f\in\hat{\bar{{\cal K}}}_\mu^n,
$$
\begin{equation}
EI'_\mu(f)^2\le \frac1{n!}\|f\|^2 \quad \textrm{if }
f\in\hat{\bar{{\cal K}}}_\mu^n, \label{(7.1)}
\end{equation}
and~(\ref{(7.1)}) holds with equality if
$f\in\hat{{\cal K}}_\mu^n$.
It can be seen that $\hat{\bar{{\cal K}}}^n_\mu$ is dense
in~$\bar{{\cal K}}_\mu^n$ in the $L_2(\mu^n)$ norm. (This
is a statement analogous to Lemma~4.1, but its proof is
simpler.) Hence relation~(\ref{(7.1)}) enables us to
extend the definition of the $n$-fold Wiener--It\^o
integrals over~$\bar{{\cal K}}_\mu^n$. All the above
mentioned relations remain valid if
$f\in\hat{\bar{{\cal K}}}_\mu^n$ is substituted by
$f\in\bar{{\cal K}}_\mu^n$, and
$f\in\hat{{\cal K}}_\mu^n$ is substituted by
$f\in{\cal K}_\mu^n$. We formulate It\^o's formula
for these integrals. It can be proved similarly to
Theorem~4.3 with the help of the diagram formula.
%valid for the Wiener--It\^o integrals discussed
%in this chapter.
\medskip\noindent
{\bf Theorem~7.1. (It\^o's formula.)}
\index{It\^o's formula for Wiener--It\^o integrals with respect
to a random orthogonal measure}
{\it Let
$\varphi_1,\dots,\varphi_m$, $\varphi_j\in{\cal K}^1_\mu$ for all
$1\le j\le m$, be an orthonormal system in $L^2_\mu$. Let some
positive integers $j_1,\dots,j_m$ be given, put $j_1+\cdots+j_m=N$,
and define for all $i=1,\dots,N$
$$
g_i=\varphi_1 \textrm{ for } 1\le i\le j_1,\quad\textrm{and \ }
g_i=\varphi_s \quad\textrm{for } j_1+\cdots+j_{s-1}<i\le j_1+\cdots+j_s,
\quad 2\le s\le m.
$$
Then
$$
H_{j_1}\left(\int'\varphi_1(x)Z_\mu(\,dx)\right)\cdots
H_{j_m}\left(\int'\varphi_m(x)Z_\mu(\,dx)\right)
=\int' g_1(x_1)\cdots g_N(x_N)Z_\mu(\,dx_1)\dots Z_\mu(\,dx_N),
$$
where $H_j(\cdot)$ denotes the $j$-th Hermite polynomial with
leading coefficient~1.}
\chapter{Non-central limit theorems}
In this chapter we formulate and prove so-called non-central
limit theorems. First we recall the notion of slowly varying
functions, which plays an important role in them.
\medskip\noindent
{\bf Definition of slowly varying functions.}
\index{slowly varying function}
{\it A function $L(t)$, $t>0$, is
said to be a slowly varying function (at infinity) if
$$
\lim_{t\to\infty}\frac{L(st)}{L(t)}=1
\quad\textrm{for all \ } s>0.
$$
}
\medskip
We shall apply the following description of slowly
varying functions.
\medskip\noindent
{\bf Theorem 8A. (Karamata's theorem.)}
\index{Karamata's theorem}
{\it If a slowly varying function $L(t)$ is bounded on every
finite interval, then it can be represented in the form
$$
L(t)=a(t)\exp\left\{\int_{t_0}^t
\frac{\varepsilon(s)}s\,ds\right\},
$$
where $a(t)\to a_0\neq0$, and $\varepsilon(t)\to0$ as
$t\to\infty$, and the functions $a(\cdot)$ and
$\varepsilon(\cdot)$ are bounded in every finite interval.}
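As an illustration (a numerical aside with ad hoc names): $L(t)=\log t$ is slowly varying, and it admits the Karamata representation with $a(t)\equiv1$, $\varepsilon(s)=1/\log s$ and $t_0=e$, since $\exp\left\{\int_e^t\frac{ds}{s\log s}\right\}=\exp\{\log\log t\}=\log t$. Both defining limits can be checked numerically:

```python
import math

def L(t):
    """log t, a standard example of a slowly varying function."""
    return math.log(t)

def epsilon(s):
    """The Karamata exponent function for L(t) = log t (t_0 = e, a = 1)."""
    return 1.0 / math.log(s)

# L(st)/L(t) -> 1 as t -> infinity for every fixed s > 0 ...
for s in [0.1, 2.0, 100.0]:
    assert abs(L(s * 1e300) / L(1e300) - 1.0) < 1e-2
# ... and epsilon(t) -> 0 as t -> infinity
assert epsilon(1e300) < 1e-2
print("L(t) = log t passes both checks")
```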
\medskip
Let $X_n$, $n\in\,\textrm{\BBB Z}_\nu$, be a stationary
Gaussian field with expectation zero and a
correlation function
\begin{equation}
r(n)=EX_0X_n=|n|^{-\alpha}a\left(\frac n{|n|}\right)L(|n|),
\quad n\in\textrm{\BBB Z}_\nu, \label{(8.1)}
\end{equation}
where $0<\alpha<\nu$, $L(t)$ is a slowly varying function,
bounded in all finite intervals, and $a(t)$ is a
continuous function on the unit sphere $S_{\nu-1}$,
satisfying the symmetry property $a(x)=a(-x)$ for all
$x\in S_{\nu-1}$. Let $G$ denote the spectral
measure of the field~$X_n$, and let us define the
measures~$G_N$, $N=1,2,\dots$, by the formula
\begin{equation}
G_N(A)=\frac{N^\alpha}{L(N)}G\left(\frac AN\right),
\quad A\in{\cal B}^\nu, \quad N=1,2,\dots. \label{(8.2)}
\end{equation}
Now we recall the definition of vague convergence of
not necessarily finite measures on a Euclidean space.
\medskip\noindent
{\bf Definition of vague convergence of measures.}
\index{vague convergence of measures}
{\it Let $G_n$, $n=1,2,\dots$, be a sequence of locally
finite measures over $R^\nu$, i.e. let $G_n(A)<\infty$
for all measurable bounded sets~$A$. We say that the
sequence $G_n$ vaguely converges to a locally finite
measure~$G_0$ (in notation $G_n\stackrel{v}{\rightarrow} G_0$) if
$$
\lim_{n\to\infty}\int f(x)\,G_n(\,dx)=\int f(x)\,G_0(\,dx)
$$
for all continuous functions~$f$ with a bounded support.}
\medskip
We formulate the following
\medskip\noindent
{\bf Lemma 8.1.} {\it Let $G$ be the spectral measure of
a stationary random field with a correlation function
$r(n)$ of the form~(\ref{(8.1)}). Then the sequence of
measures~$G_N$ defined in~(\ref{(8.2)}) tends vaguely to
a locally finite measure~$G_0$. The measure~$G_0$ has
the homogeneity property
\begin{equation}
G_0(A)=t^{-\alpha}G_0(tA) \quad \textrm{for all } A\in{\cal B}^\nu
\quad\textrm{and } t>0, \label{(8.3)}
\end{equation}
and it satisfies the identity
\begin{eqnarray}
&&2^\nu\int e^{i(t,x)}
\prod_{j=1}^\nu\frac{1-\cos x^{(j)}}{(x^{(j)})^2}
\,G_0(\,dx) \label{(8.4)} \\
&&\qquad =\int_{[-1,1]^\nu} (1-|x^{(1)}|)\cdots (1-|x^{(\nu)}|)
\frac{a\left(\frac{x+t}{|x+t|}\right)}{|x+t|^\alpha}\,dx,
\quad\textrm{for all } t\in R^\nu. \nonumber
\end{eqnarray}
}
\medskip
We postpone the proof of Lemma~8.1 for a while.
Formulae~(\ref{(8.3)}) and~(\ref{(8.4)}) imply that the
function~$a(t)$ and the number~$\alpha$ in the
definition~(\ref{(8.1)}) of a correlation function~$r(n)$
uniquely determine the measure~$G_0$. Indeed, by
formula~(\ref{(8.4)}) they determine the (finite) measure
$\prod\limits_{j=1}^\nu\frac{1-\cos x^{(j)}}{(x^{(j)})^2}G_0(\,dx)$,
since they determine its Fourier transform. Hence they also
determine the measure~$G_0$. (Formula~(\ref{(8.3)}) shows
that this is a locally finite measure). Let us also remark
that since $G_N(A)=G_N(-A)$ for all $N=1,2,\dots$ and
$A\in {\cal B}^\nu$, the relation $G_0(A)=G_0(-A)$,
$A\in{\cal B}^\nu$ also holds. These properties of the
measure~$G_0$ imply that it can be considered as the
spectral measure of a generalized random field. Now we
formulate
\medskip\noindent
{\bf Theorem 8.2.} {\it Let $X_n$, $n\in\textrm{\BBB Z}_\nu$, be a
stationary Gaussian field with a correlation function $r(n)$
satisfying relation~(\ref{(8.1)}). Let us define the stationary random
field $\xi_j=H_k(X_j)$, $j\in\textrm{\BBB Z}_\nu$, with some positive
integer~$k$, where $H_k(x)$ denotes the $k$-th Hermite polynomial
with leading coefficient~1, and assume that the parameter~$\alpha$
appearing in~(\ref{(8.1)}) satisfies the relation $0<\alpha<\frac\nu k$.
If the random fields $Z^N_n$, $N=1,2,\dots$, $n\in\textrm{\BBB Z}_\nu$,
are defined by formula~(\ref{(1.1)}) with
$A_N=N^{\nu-k\alpha/2}L(N)^{k/2}$ and the above defined
$\xi_j=H_k(X_j)$, then their multi-dimensional
distributions tend to those of the random field~$Z^*_n$,
$$
Z^*_n=\int \tilde\chi_n(x_1+\cdots+x_k)\,
Z_{G_0}(\,dx_1)\dots Z_{G_0}(\,dx_k), \quad n\in\textrm{\BBB Z}_\nu.
$$
Here $Z_{G_0}$ is a random spectral measure corresponding to the
spectral measure $G_0$ which appeared in Lemma~8.1. The function
$\tilde\chi_n(\cdot)$, $n=(n^{(1)},\dots,n^{(\nu)})$, is (similarly
to Chapter~6) the Fourier transform of the indicator function of
the $\nu$-dimensional unit cube
$\prod\limits_{p=1}^\nu[n^{(p)},n^{(p)}+1]$.}
\medskip\noindent
{\it Remark.}\/ The condition that the correlation function~$r(n)$
of the random field $X_n$, $n\in\textrm{\BBB Z}_\nu$, satisfies
formula~(\ref{(8.1)}) can be weakened. Theorem~8.2 and Lemma~8.1 remain
valid if~(\ref{(8.1)}) is replaced by the slightly weaker condition
$$
\lim_{T\to\infty}\sup_{n\colon\;n\in\textrm{\BBB Z}_\nu,\,|n|\ge T}
\left|\frac{r(n)}{|n|^{-\alpha}a\left(\frac n{|n|}\right)L(|n|)}-1\right|=0,
$$
where $0<\alpha<\nu$, $L(t)$ is a slowly varying function, bounded
in all finite intervals, and $a(t)$ is a continuous function on the
unit sphere ${\cal S}_{\nu-1}$, satisfying the symmetry property
$a(x)=a(-x)$ for all $x\in{\cal S}_{\nu-1}$.
\medskip
First we explain why the choice of the normalizing constant~$A_N$ in
Theorem~8.2 was natural, then we explain the ideas of the proof,
finally we work out the details.
It can be shown, for instance with the help of Corollary~5.5, that
$EH_k(\xi)H_k(\eta)=E\colon\!\xi^k\!\colon\colon\!\eta^k\!\colon
= k!(E\xi\eta)^k$ for a Gaussian random vector
$(\xi,\eta)$ with $E\xi=E\eta=0$ and $E\xi^2=E\eta^2=1$. Hence
$$
E(Z_n^N)^2=\frac{k!}{A_N^2}\sum_{j,\,l\in B_0^N}r(j-l)^k
\sim\frac{k!}{A_N^2}
\sum_{j,\,l\in B_0^N}|j-l|^{-k\alpha}a^k
\left(\frac{j-l}{|j-l|}\right)L(|j-l|)^k,
$$
with the set $B_0^N$ introduced after formula~(\ref{(1.1)}). Some
calculation with the help of the above formula shows that with
our choice of~$A_N$ the expectation $E(Z_n^N)^2$ is bounded away
both from zero and from infinity, therefore this is the natural
norming factor. In this calculation we have to exploit the
condition $k\alpha<\nu$, which implies that in the sum
expressing $E(Z_n^N)^2$ those terms are dominant for which
$j-l$ is relatively large, more explicitly, for which $|j-l|$ is of
order~$N$. There are $\textrm{const.}\, N^{2\nu}$ such terms.
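To make this step more transparent, let us note (as a rough sketch,
replacing the sum by an integral and using that $L(\cdot)$ is slowly
varying, hence $L(|j-l|)\sim L(N)$ for $|j-l|$ of order~$N$) that
$$
\frac{k!}{A_N^2}\sum_{j,\,l\in B_0^N}|j-l|^{-k\alpha}
a^k\left(\frac{j-l}{|j-l|}\right)L(|j-l|)^k
\sim\frac{k!\,N^{2\nu-k\alpha}L(N)^k}{A_N^2}
\int_{[0,1]^\nu}\int_{[0,1]^\nu}
|u-v|^{-k\alpha}a^k\left(\frac{u-v}{|u-v|}\right)\,du\,dv,
$$
where the double integral is finite, because the condition
$k\alpha<\nu$ makes the singularity $|u-v|^{-k\alpha}$ integrable.
With the choice $A_N=N^{\nu-k\alpha/2}L(N)^{k/2}$ we have
$A_N^2=N^{2\nu-k\alpha}L(N)^k$, hence the right-hand side is a
finite positive constant.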
The field $\xi_n$ is subordinated to the Gaussian field~$X_n$.
It is natural to rewrite it in canonical form, and to express
$Z_n^N$ via multiple Wiener--It\^o integrals. It\^o's formula
yields the relation
$$
\xi_n=H_k\left(\int e^{i(n,x)}Z_G(\,dx)\right)
=\int e^{i(n,x_1+\cdots+x_k)}Z_G(\,dx_1)\dots Z_G(\,dx_k),
$$
where $Z_G$ is the random spectral measure adapted to the random
field~$X_n$. Then
\begin{eqnarray*}
Z_n^N&=&\frac1{A_N}\sum_{j\in B_n^N}\int e^{i(j,x_1+\cdots+x_k)}
Z_G(\,dx_1)\dots Z_G(\,dx_k)\\
&=&\frac1{A_N}\int e^{i(Nn,x_1+\cdots+x_k)}\prod_{j=1}^\nu
\frac{e^{iN(x_1^{(j)}+\cdots+x_k^{(j)})}-1}
{e^{i(x_1^{(j)}+\cdots+x_k^{(j)})}-1}\,
Z_G(\,dx_1)\dots Z_G(dx_k).
\end{eqnarray*}
Let us make the substitution $y_j=Nx_j$, $j=1,\dots,k$, in the last
formula, and let us rewrite it in a form resembling
formula~(\ref{($6.5'$)}).
To this end, let us introduce the measures~$G_N$ defined
in~(\ref{(8.2)}). By Lemma~4.6 we can write
$$
Z_n^N\stackrel{\Delta}{=}\int f_N(y_1,\dots,y_k)
\tilde\chi_n(y_1+\cdots+y_k)\,
Z_{G_N}(\,dy_1)\dots Z_{G_N}(dy_k)
$$
with
\begin{equation}
f_N(y_1,\dots,y_k)=\prod_{j=1}^\nu
\frac{i(y_1^{(j)}+\cdots+y_k^{(j)})}
{\left(\exp\left\{i\frac1N(y_1^{(j)}
+\cdots+y_k^{(j)})\right\}-1\right)N},
\label{(8.5)}
\end{equation}
where $\tilde\chi_n(\cdot)$ is the Fourier transform of the
indicator function of the unit cube
$\prod\limits_{j=1}^\nu[n^{(j)},n^{(j)}+1)$.
(It follows from Lemma~8B formulated below and the Fubini
theorem that the set where the denominator of the function
$f_N$ vanishes, i.e.\ the set where
$y_1^{(j)}+\cdots+y_k^{(j)}=2lN\pi$ with some integer
$l\neq0$ and some $1\le j\le\nu$, has zero
$G_N\times\cdots\times G_N$ measure. This means that the
functions $f_N$ are well defined.)
The functions~$f_N$ tend to 1 uniformly in all bounded regions,
and the measures~$G_N$ tend vaguely to~$G_0$ as $N\to\infty$
by Lemma~8.1. These relations suggest the following limiting
procedure. The limit of $Z_n^N$ can be obtained by
substituting $f_N$ with~1 and $G_N$ with~$G_0$ in the
Wiener--It\^o integral expressing~$Z_n^N$. We want to justify
this formal limiting procedure. For this we have to show that
the Wiener--It\^o integral expressing~$Z_n^N$ is essentially
concentrated in a large bounded region independent of~$N$.
The $L_2$~isomorphism of Wiener--It\^o integrals can help us
in showing that. The next result formulated in Lemma~8.3 is
a useful tool for the justification of the above limiting
procedure.
Before formulating this lemma we make a small digression. It
was explained that Wiener--It\^o integrals can be defined
also with respect to random spectral measures $Z_G$ adapted
to a stationary Gaussian random field whose spectral
measure~$G$ may have atoms, and we can work with them
similarly as in the case of non-atomic spectral measures.
Here a lemma will be proved which shows that in the proof
of Theorem~8.2 we do not need this observation, because
if the correlation function of the random field
satisfies~(\ref{(8.1)}), then its spectral measure is
non-atomic.
\medskip\noindent
{\bf Lemma~8B.} {\it Let the correlation function of a
stationary random field $X_n$, $n\in\textrm{\BBB Z}_\nu$,
satisfy the relation $r(n)\le A|n|^{-\alpha}$ with some
$A>0$ and $\alpha>0$ for all $n\in\textrm{\BBB Z}_\nu$,
$n\neq0$. Then its spectral measure $G$ is non-atomic.
Moreover, all hyperplanes $\sum\limits_{j=1}^\nu c_jx^{(j)}=d$
defined with some constants $c_j$ and $d$ have zero $G$ measure.}
\medskip\noindent
{\it Proof of Lemma 8B.} Lemma 8B clearly holds if $\alpha>\nu$,
because in this case the spectral measure~$G$ has even a density
function $g(x)=\sum\limits_{n\in\textrm{\BBB Z}_\nu}e^{-i(n,x)}r(n)$.
On the other hand, the $p$-fold convolution of the spectral measure
$G$ with itself (on the torus $R^\nu/2\pi\textrm{\BBB Z}_\nu$) has
Fourier coefficients $r(n)^p$, $n\in\textrm{\BBB Z}_\nu$. These
coefficients are absolutely summable in the case
$p>\frac\nu\alpha$, since
$\sum\limits_{n\neq0}|r(n)|^p\le A^p\sum\limits_{n\neq0}
|n|^{-p\alpha}<\infty$, hence for such a~$p$ this convolution has
even a continuous density, and in particular it is non-atomic.
Thus it is enough
to show that if the convolution $G*G$ is a non-atomic measure,
then so is the measure~$G$. But this is obvious, because if
there were a point $x\in R^\nu/2\pi\textrm{\BBB Z}_\nu$ such
that $G(\{x\})>0$, then $G*G(\{x+x\})>0$ would hold, and this
is a contradiction. (Here addition is taken on the torus.) It
can be proved similarly that all hyperplanes have zero $G$ measure.
\hfill$\qed$
\medskip
Now we formulate the following result.
\medskip\noindent
{\bf Lemma~8.3.} {\it Let $G_N$, $N=1,2,\dots$, be a sequence
of non-atomic spectral measures on $R^\nu$ tending vaguely
to a non-atomic spectral measure~$G_0$. Let a sequence of
measurable functions $K_N=K_N(x_1,\dots,x_k)$,
$N=0,1,2,\dots$, be given such that
$K_N\in\bar{{\cal H}}_{G_N}^k$ for $N=1,2,\dots$. Assume
further that the following properties hold: For all
$\varepsilon>0$ there exist some constants
$A=A(\varepsilon)>0$ and $N_0=N_0(\varepsilon)>0$ and
finitely many rectangles $P_1,\dots,P_M$ with some
cardinality $M=M(\varepsilon)$ on $R^{k\nu}$ which satisfy
the following conditions~(a) and~(b) formulated below. (We
call a set $P\in{\cal B}^{k\nu}$ a rectangle if it can be
written in the form $P=L_1\times\cdots\times L_k$ with
some bounded open sets $L_s\in{\cal B}^\nu$, $1\le s\le k$,
with boundaries $\partial L_s$ of zero $G_0$~measure, i.e.\
$G_0(\partial L_s)=0$ for all $1\le s\le k$.)
\medskip
\begin{description}
\item[(a)] The function $K_0$ is continuous on the set
$B=[-A,A]^{k\nu}\setminus\bigcup\limits_{j=1}^MP_j$, and $K_N\to K_0$
uniformly on the set $B$ as $N\to\infty$. Besides, the hyperplanes
$x^{(p)}=\pm A$ have zero $G_0$~measure for all $1\le p\le\nu$.
\item[(b)] $\int_{R^{k\nu}\setminus B}|K_N(x_1,\dots,x_k)|^2
G_N(\,dx_1)\dots G_N(dx_k)<\frac{\varepsilon^3}{k!}$
if $N=0$ or $N\ge N_0$, and
$K_0(-x_1,\dots,-x_k)=\overline{K_0(x_1,\dots,x_k)}$ for all
$(x_1,\dots,x_k)\in R^{k\nu}$.
\end{description}
\medskip
Then $K_0\in\bar{{\cal H}}_{G_0}^k$, and
$$
\int K_N(x_1,\dots,x_k)\,Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k)
\stackrel{{\cal D}}{\rightarrow}
\int K_0(x_1,\dots,x_k)\,Z_{G_0}(\,dx_1)\dots Z_{G_0}(\,dx_k)
$$
%\begin{eqnarray*}
%&&\int K_N(x_1,\dots,x_k)\,Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k) \\
%&&\qquad \stackrel{{\cal D}}{\rightarrow}
%\int K_0(x_1,\dots,x_k)\,Z_{G_0}(\,dx_1)\dots Z_{G_0}(\,dx_k)
%\end{eqnarray*}
as $N\to\infty$, where $\stackrel{{\cal D}}{\rightarrow}$ denotes
convergence in distribution.}
\medskip\noindent
{\it Remark.}\/ In the proof of Theorem~8.2 or of its generalization
Theorem~$8.2'$ formulated later a simpler version of Lemma~8.3 with a
simpler proof would suffice. We could work with such a version where
the rectangles~$P_j$ do not appear. We formulated this somewhat more
complicated result, because it can be applied in the proof of more
general theorems, where the limit is given by such a Wiener--It\^o
integral whose kernel function may have discontinuities. Thus it
seemed to be better to present such a result even if its proof is
more complicated. The proof applies some arguments of Lemma~4.1.
To work out the details it seemed to be useful to introduce some
metric in the space of probability measures which metricizes weak
convergence. Although it may look a bit too technical, it made it
possible to carry out some arguments in a natural way.
\medskip\noindent
{\it Proof of Lemma~8.3.}\/
Conditions~(a) and~(b) obviously imply that
$$
\int|K_0(x_1,\dots,x_k)|^2\,G_0(\,dx_1)\dots G_0(\,dx_k)<\infty,
$$
hence $K_0\in\bar{{\cal H}}_{G_0}^k$. Let us fix an
$\varepsilon>0$, and let us choose some $A>0$, $N_0>0$ and
rectangles $P_1,\dots,P_M$ which satisfy conditions~(a)
and~(b) with this~$\varepsilon$. Then
\begin{eqnarray}
&&E\left[\int [1-\chi_B(x_1,\dots,x_k)]K_N(x_1,\dots,x_k)\,
Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k)\right]^2 \nonumber \\
&&\qquad \le k!\int_{R^{k\nu}\setminus B}|K_N(x_1,\dots,x_k)|^2
G_N(\,dx_1)\dots G_N(\,dx_k)<\varepsilon^3
\label{(8.6)}
\end{eqnarray}
for $N=0$ or $N\ge N_0$, where $\chi_B$ denotes the indicator function
of the set~$B$ introduced in the formulation of condition~(a).
Since $B\subset [-A,A]^{k\nu}$, and $G_N\stackrel{v}{\rightarrow} G_0$,
we also have
\begin{equation}
G_N\times\cdots\times G_N(B)\le C(A)<\infty
\quad\textrm{if } N\ge N_1 \label{(8.7)}
\end{equation}
with some constant $C(A)<\infty$ and threshold
index~$N_1=N_1(A,\varepsilon)$.
First we shall reduce the proof of Lemma~8.3 to the proof
of the relation
\begin{eqnarray}
&&\int K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)\,
Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k) \nonumber \\
&&\qquad \stackrel{{\cal D}}{\rightarrow}
\int K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)\,
Z_{G_0}(\,dx_1)\dots Z_{G_0}(\,dx_k)
\label{(8.8)}
\end{eqnarray}
with the help of formulas~(\ref{(8.6)}) and~(\ref{(8.7)}),
and then we shall prove~(\ref{(8.8)}). It is simpler
to carry out this reduction with the help of some metric on
the space of probability measures which metricizes weak
convergence in this space. Hence I recall some classical
notions and results about convergence of probability
measures on a metric space which will be useful in our
considerations.
\medskip\noindent
{\bf Definition of Prokhorov metric, and its properties.}
{\it Given a separable metric space $(X,\rho)$ with some
metric~$\rho$ let ${\cal S}$ denote the space of probability
measures on it. The Prokhorov metric $\rho_P$ is the
metric in the space ${\cal S}$ defined by the formula
$\rho_P(\mu,\nu)=\inf\{\varepsilon\colon\;
\mu(A)\le\nu(A^\varepsilon)+\varepsilon\textrm{ for all }
A\in{\cal A}\}$ for two probability measures
$\mu,\nu\in{\cal S}$, where ${\cal A}$ denotes the
$\sigma$-algebra of the Borel measurable sets of~$(X,\rho)$, and
$A^\varepsilon=\{x\colon\;\rho(x,A)<\varepsilon\}$.
The above defined $\rho_P$ is really a metric on~${\cal S}$
(in particular, $\rho_P(\mu,\nu)=\rho_P(\nu,\mu)$)
which metricizes the weak convergence of probability
measures in the metric space~$(X,\rho)$, i.e.
$\mu_N\stackrel{w}{\rightarrow}\mu_0$ for a sequence of
probability measures $\mu_N$, $N=0,1,2,\dots$, if and only if
$\lim\limits_{N\to\infty}\rho_P(\mu_N,\mu_0)=0$.}
\index{Prokhorov metric in the space of probability measures}
\medskip
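The following simple example may help in digesting this definition.
If $\mu=\delta_x$ and $\nu=\delta_y$ are unit point masses at the
points $x,y\in X$, then
$$
\rho_P(\delta_x,\delta_y)=\min(\rho(x,y),1),
$$
since for the set $A=\{x\}$ the condition
$\mu(A)\le\nu(A^\varepsilon)+\varepsilon$ can hold only if either
$y\in A^\varepsilon$, i.e.\ $\rho(x,y)<\varepsilon$, or
$\varepsilon\ge1$, while for such~$\varepsilon$ the condition holds
for all sets~$A$.
\medskip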
The results formulated in this definition can be found e.g. in
R.M.~Dudley: Distances of probability measures and random variables.
Ann.~Math.~Statist.~39, 1563--1572 (1968). Let us also recall the
definition of weak convergence of probability measures on a metric
space.
\medskip\noindent
{\bf Definition of weak convergence of probability measures on a
metric space.} {\it A sequence of probability measures $\mu_n$,
$n=1,2,\dots$, on a metric space $(X,\rho)$ converges weakly
to a probability measure $\mu$ on this space (in notation
$\mu_n\stackrel{w}{\rightarrow}\mu$) if
$\lim\limits_{n\to\infty}\int f(x)\mu_n(\,dx)=\int f(x)\mu(\,dx)$
for all continuous and bounded functions~$f$ on the space $(X,\rho)$.}
\index{weak convergence of probability measures}
\medskip
I formulated the above result for probability measures
in a general metric space, but I shall work on the real
line. Given a random variable~$\xi$ let $\mu(\xi)$ denote
its distribution. Let us remark that the convergence
$\xi_N\stackrel{{\cal D}}{\rightarrow}\xi_0$ as $N\to\infty$
of a sequence of random variables, $\xi_0,\xi_1,\xi_2,\dots$
is equivalent to the statement
$\mu(\xi_N)\stackrel{w}{\rightarrow}\mu(\xi_0)$ or
$\rho_P(\mu(\xi_N),\mu(\xi_0))\to0$ as $N\to\infty$.
Hence by putting
$\xi_N=k!I_{G_N}(K_N(x_1,\dots,x_k))$, $N=0,1,2,\dots$
we can reformulate the statement of Lemma~8.3 in the
following way. For all $\varepsilon>0$ there exists
some index $N'_0=N'_0(\varepsilon)$ such that
$\rho_P(\mu(\xi_N),\mu(\xi_0))\le4\varepsilon$
for all~$N\ge N'_0$.
To prove the reduction of Lemma~8.3 to formula~(\ref{(8.8)})
let us first show that for three random variables $\xi$,
$\bar\xi$ and~$\eta$ such that
$P(|\eta|\ge\varepsilon)\le\varepsilon$ the inequality
\begin{equation}
\rho_P(\mu(\xi+\eta),\mu(\bar\xi))
\le\rho_P(\mu(\xi),\mu(\bar\xi))+\varepsilon \label{(8.9)}
\end{equation}
holds.
Indeed, since $\{\omega\colon\;\xi(\omega)+\eta(\omega)\in A\}
\subset\{\omega\colon\;\xi(\omega)
\in A^\varepsilon\}\cup\{\omega\colon\;|\eta(\omega)|
\ge\varepsilon\}$,
we have
$P(\xi+\eta\in A)\le P(\xi\in A^\varepsilon)+\varepsilon$
for any
set~$A\in{\cal B}_1$ if $P(|\eta|\ge\varepsilon)\le\varepsilon$.
Besides, $P(\xi\in A^\varepsilon)
\le P(\bar\xi\in A^{\varepsilon+\delta})+\delta$ for all
$\delta>\rho_P(\mu(\xi),\mu(\bar\xi))$. Hence
$P(\xi+\eta\in A)\le
P(\bar\xi\in A^{\varepsilon+\delta})+\varepsilon+\delta$ for all
$A\in{\cal B}_1$ and $\delta>\rho_P(\mu(\xi),\mu(\bar\xi))$,
i.e.\ $\rho_P(\mu(\xi+\eta),\mu(\bar\xi))\le\varepsilon+\delta$,
and this implies the inequality
$\rho_P(\mu(\xi+\eta),\mu(\bar\xi))
\le\rho_P(\mu(\xi),\mu(\bar\xi))+\varepsilon$.
Put
\begin{eqnarray*}
\xi_N^{(1)}&=&k!I_{G_N}(K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)),\\
\xi_N^{(2)}&=&k!I_{G_N}((K_N(x_1,\dots,x_k)-K_0(x_1,\dots,x_k))
\chi_B(x_1,\dots,x_k)),\\
\xi_N^{(3)}&=&k!I_{G_N}((1-\chi_B(x_1,\dots,x_k))K_N(x_1,\dots,x_k))
for all $N=0,1,2,\dots$. With this notation it follows from
relation~(\ref{(8.8)}) and the fact that the Prokhorov metric
metricizes the weak convergence that
$$
\rho_P(\mu(\xi_N^{(1)}),\mu(\xi_0^{(1)}))\le\varepsilon
\quad \textrm{if } N\ge N'_1(\varepsilon)
$$
with some threshold index~$N'_1(\varepsilon)$.
Formulas~(\ref{(8.6)}) and~(\ref{(8.7)})
together with the Chebyshev inequality imply that
$P(|\xi_N^{(2)}|\ge\varepsilon)\le\varepsilon$ and
$P(|\xi_N^{(3)}|\ge\varepsilon)\le\varepsilon$
if $N\ge N'_2(\varepsilon)$ or $N=0$ with some threshold
index~$N'_2(\varepsilon)$.
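For instance, in the case of the random variable~$\xi_N^{(3)}$,
which is just the Wiener--It\^o integral estimated
in~(\ref{(8.6)}), the Chebyshev step takes the form
$$
P(|\xi_N^{(3)}|\ge\varepsilon)
\le\frac{E(\xi_N^{(3)})^2}{\varepsilon^2}
<\frac{\varepsilon^3}{\varepsilon^2}=\varepsilon
\quad\textrm{if $N=0$ or $N$ is sufficiently large.}
$$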
Besides, we have $\xi_0=\xi_0^{(1)}+\xi_0^{(3)}$ and
$\xi_N=\xi_N^{(1)}+\xi_N^{(2)}+\xi_N^{(3)}$ for $N=1,2,\dots$.
The above mentioned properties of the random variables we considered
together with relation~(\ref{(8.9)}) imply that
\begin{eqnarray*}
\rho_P(\mu(\xi_N),\mu(\xi_0))
&=&\rho_P(\mu(\xi_N^{(1)}+\xi_N^{(2)}+\xi_N^{(3)}),
\mu(\xi_0^{(1)}+\xi_0^{(3)}))\\
&\le&\rho_P(\mu(\xi_N^{(1)}+\xi_N^{(2)}+\xi_N^{(3)}),
\mu(\xi_0^{(1)}))+\varepsilon \\
&\le&\rho_P(\mu(\xi_N^{(1)}+\xi_N^{(2)}),\mu(\xi_0^{(1)}))
+2\varepsilon\\
&\le&\rho_P(\mu(\xi_N^{(1)}),\mu(\xi_0^{(1)}))+3\varepsilon
\le 4\varepsilon
\end{eqnarray*}
if $N\ge N'_0(\varepsilon)=\max(N'_1(\varepsilon),N'_2(\varepsilon))$.
Hence Lemma~8.3 follows from~(\ref{(8.8)}).
\medskip
To prove~(\ref{(8.8)}) we will show that
$K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)$ can be well
approximated by simple functions from
$\hat{\bar{{\cal H}}}_{G_0}^k$ in the following sense.
For all $\varepsilon'>0$ there exists a simple function
$f_{\varepsilon'}\in\hat{\bar{{\cal H}}}^k_{G_0}$ such that
\begin{equation}
\int (K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)
-f_{\varepsilon'}(x_1,\dots,x_k))^2
G_0(\,dx_1)\dots G_0(\,dx_k)\le\frac{{\varepsilon'}^3}{k!}
\label{(8.10)}
\end{equation}
and also
\begin{equation}
\int (K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)
-f_{\varepsilon'}(x_1,\dots,x_k))^2
G_N(\,dx_1)\dots G_N(\,dx_k)\le\frac{{\varepsilon'}^3}{k!}
\label{(8.11)}
\end{equation}
if $N\ge N_0$ with some threshold index
$N_0=N_0(\varepsilon',K_0(\cdot)\chi_B(\cdot))$. Moreover,
this simple function $f_{\varepsilon'}$ is adapted to such
a regular system ${\cal D}=\{\Delta_j,\;j=\pm1,\dots,\pm M\}$
whose elements have boundaries with zero $G_0$ measure, i.e.
$G_0(\partial\Delta_j)=0$ for all $1\le |j|\le M$.
To prove~(\ref{(8.8)}) with the help of these estimates first
I show that this function
$f_{\varepsilon'}\in\hat{\bar{{\cal H}}}_{G_0}^k$
satisfies the relation
\begin{equation}
\int f_{\varepsilon'}(x_1,\dots,x_k)
\,Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k)
\stackrel{{\cal D}}{\rightarrow}
\int f_{\varepsilon'}(x_1,\dots,x_k)
\,Z_{G_0}(\,dx_1)\dots Z_{G_0}(\,dx_k)
\label{(8.12)}
\end{equation}
%\begin{eqnarray}
%&&\int f_\varepsilon(x_1,\dots,x_k)
%\,Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k)
%\nonumber \\
%&&\stackrel{{\cal D}}{\rightarrow}
%\int f_\varepsilon(x_1,\dots,x_k)
%\,Z_{G_0}(\,dx_1)\dots Z_{G_0}(\,dx_k)
%\label{(8.12)}
%\end{eqnarray}
as $N\to\infty$. To prove~(\ref{(8.12)}) observe that the
regular system ${\cal D}=\{\Delta_j,\;j=\pm1,\dots,\pm M\}$
to which the function
$f_{\varepsilon'}\in\hat{\bar{{\cal H}}}_{G_0}^k$
is adapted has the property $G_0(\partial\Delta_j)=0$ for
all $j=\pm1,\dots,\pm M$. Besides, the spectral measures
$G_N$ are such that $G_N\stackrel{v}{\rightarrow}G_0$.
Hence the (Gaussian) random vectors
$(Z_{G_N}(\Delta_j),\;j=\pm1,\dots,\pm M)$ converge in
distribution to the (Gaussian) random vector
$(Z_{G_0}(\Delta_j),\;j=\pm1,\dots,\pm M)$ as $N\to\infty$.
The same holds for the random variables we get by applying
a continuous function (of $M$ variables) to the coordinates
of these random vectors. Since the integrals
in~(\ref{(8.12)}) are polynomials of these random vectors,
we can apply these results to them, and they imply
relation~(\ref{(8.12)}).
Put
\begin{equation}
K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k)
-f_{\varepsilon'}(x_1,\dots,x_k)=h_0(x_1,\dots,x_k). \label{(8.13)}
\end{equation}
By relations~(\ref{(8.10)}), (\ref{(8.11)}) and the Chebyshev
inequality
$P(|k!I_{G_0}(h_0)|\ge\varepsilon')\le\varepsilon'$
and $P(|k!I_{G_N}(h_0)|\ge\varepsilon')\le\varepsilon'$
if $N\ge N_0$. Since
$I_{G_N}(K_0(x_1,\dots,x_k)\chi_B(x_1,\dots,x_k))=
I_{G_N}(f_{\varepsilon'}(x_1,\dots,x_k))+I_{G_N}(h_0(x_1,\dots,x_k))$,
$N=0,1,2,\dots$,
the above relations together with formulas~(\ref{(8.12)})
and~(\ref{(8.9)}) (with the number $\varepsilon'$ instead of
$\varepsilon$) imply that
\begin{eqnarray*}
&&\lim_{N\to\infty}\rho_P(\mu(k!I_{G_N}(K_0(\cdot)\chi_B(\cdot))),
\mu(k!I_{G_0}(K_0(\cdot)\chi_B(\cdot)))) \\
&&=\lim_{N\to\infty}
\rho_P(\mu(k!I_{G_N}(f_{\varepsilon'}(\cdot)+h_0(\cdot))),
\mu(k!I_{G_0}(f_{\varepsilon'}(\cdot)+h_0(\cdot)))) \\
&&\qquad \le\lim_{N\to\infty}\rho_P(\mu(k!I_{G_N}(h_0(\cdot))),
\mu(k!I_{G_0}(h_0(\cdot))))+2\varepsilon'=2\varepsilon'.
\end{eqnarray*}
Since this inequality holds for all $\varepsilon'>0$ this
implies relation~(\ref{(8.8)}). To complete the proof of
Lemma~8.3 we have to justify relations~(\ref{(8.10)})
and~(\ref{(8.11)}).
\medskip
Relation~(\ref{(8.10)}) is actually a version
of Lemma~4.1, but it states a slightly stronger
approximation result under the conditions of
Lemma~8.3. The statement that for all $\varepsilon'$
the function $K_0(\cdot)\chi_B(\cdot)$ can be
approximated with a simple function
$f_{\varepsilon'}(x_1,\dots,x_k)$ which
satisfies~(\ref{(8.10)}) agrees with Lemma~4.1.
But now we want to find a function
$f_{\varepsilon'}$ which is adapted to such a
regular system
${\cal D}=\{\Delta_j,\;j=\pm1,\dots,\pm M\}$ whose
elements have the additional property
$G_0(\partial \Delta_j)=0$ for all indices~$j$.
A function $f_{\varepsilon'}$ with these
properties can be constructed by means of a slight
modification of the proof of Lemma~4.1. But in the
present case we exploit that the function
$K_0(\cdot)\chi_B(\cdot)$ is almost everywhere
continuous with respect to the product measure
$G_0^k=\underbrace{G_0\times\cdots\times G_0}_{k\textrm{ times}}$.
This property is needed in the first step of the
construction, where we reduce the approximation result
we want to prove to a slightly modified version of
{\it Statement~A}. In this modified version we
claim the good approximability of the indicator
function of such sets $A$ which satisfy not only the
properties demanded in {\it Statement~A},\/ but also the
relations $G_0(\partial A)=0$ and $G_0(\partial A_1)=0$.
On the other hand, we demand the same property
$G_0(\partial B)=0$ about the set $B$ whose indicator
function is the approximating function in
{\it Statement A}. To carry out the
reduction needed in this case we approximate the
function~$K_0(\cdot)\chi_B(\cdot)$ with such an
elementary function (a function taking finitely many
values) whose level sets have boundaries with zero
$G_0^k=G_0\times\dots\times G_0$ measure. This is possible,
since the boundaries of these level sets consist of such
points where either the function $K_0(\cdot)\chi_B(\cdot)$
takes a value from an appropriately chosen finite set, or
it is discontinuous.
To complete the reduction of our result to the new version
of {\it Statement~A}\/ we still have to show that if the
set $A$ can be written in the form $A=A_1\cup(-A_1)$
such that $A_1\cap(-A_1)=\emptyset$, and
$G_0^k(\partial A_1)=0$, then for all $\eta>0$ there is
some $\bar A_1=\bar A_1(\eta)\subset A_1$ such that
$G_0^k(A\setminus(\bar A_1\cup(-\bar A_1)))\le\eta$,
$\rho(\bar A_1,-\bar A_1)>0$, and
$G_0^k(\partial \bar A_1)=0$. Indeed, there is a
compact set $K\subset A_1$ such that
$G_0^k(A_1\setminus K)\le\frac\eta2$. Then also
the relation $\rho(K,-K)=\delta>0$ holds. By the
Heine--Borel theorem we can find an open set $U$
such that $K\subset U\subset K^{\delta/3}$ with
$K^{\delta/3}=\{x\colon \rho(x,K)<\frac\delta3\}$,
and $G_0^k(\partial U)=0$. Then the set
$\bar A_1=A_1\cap U$ satisfies the desired
properties.
After making the reduction of the result we want to
prove to this modified version of {\it Statement A}\/
we can follow the construction of
Lemma~4.1, but we choose in each step sets with zero
$G_0\times\cdots\times G_0$ boundary.
A more careful analysis shows that the function constructed
in such a way satisfies also~(\ref{(8.11)}) for $N\ge N_0$
with a sufficiently large threshold index~$N_0$. Here we
exploit that $G_N\stackrel{v}{\rightarrow}G_0$. This may
enable us to show that the estimates we need in the
construction hold not only with respect to the spectral
measure~$G_0$ but also with respect to the spectral
measures~$G_N$ with a sufficiently large index~$N$.
We can get another explanation of the estimate~(\ref{(8.11)})
by exploiting that the function $h_0(x_1,\dots,x_k)$ defined
in~(\ref{(8.13)}) is almost everywhere continuous with
respect to the measure $G_0\times\cdots\times G_0$. It can
be shown that the vague convergence has similar properties
as the weak convergence, hence the above mentioned almost
everywhere continuity implies that
$$
\lim_{N\to\infty}\int |h_0(x_1,\dots,x_k)|^2\,G_N(\,dx_1)\dots G_N(\,dx_k)
=\int |h_0(x_1,\dots,x_k)|^2\,G_0(\,dx_1)\dots G_0(\,dx_k).
$$
\hfill$\qed$
\medskip\noindent
{\it Remark.} We have formulated this statement in the case
when $G_N$ is a spectral measure on $R^\nu$. But it remains
valid if $G_N$ is a spectral measure on the torus of size
$2C_N\pi$ with $C_N\to\infty$ as $N\to\infty$, provided that we
identify this torus with the set
$[-C_N\pi,C_N\pi)^\nu\subset R^\nu$ in a natural way.
\medskip
Now we turn to the proof of Theorem~8.2.
\medskip\noindent
{\it Proof of Theorem~8.2.} We want to prove that for all
positive integers $p$, real numbers $c_1,\dots,c_p$ and
$n_l\in\textrm{\BBB Z}_\nu$, $l=1,\dots,p$,
$$
\sum_{l=1}^p c_l Z^N_{n_l}\stackrel{{\cal D}}{\rightarrow}
\sum_{l=1}^p c_lZ^*_{n_l},
$$
since this relation also implies the convergence of the
multi-dimensional distributions. Applying the same calculation
as before we get with the help of Lemma~4.6 that
$$
\sum_{l=1}^p c_lZ_{n_l}^N=\frac1{A_N}\sum_{l=1}^p c_l\int
\sum_{j\in B_{n_l}^N}e^{i(j,x_1+\cdots+x_k)}
\,Z_G(\,dx_1)\dots Z_G(\,dx_k),
$$
and
$$
\sum_{l=1}^p c_lZ^N_{n_l}\stackrel{\Delta}{=}
\int K_N(x_1,\dots,x_k)
\,Z_{G_N}(\,dx_1)\dots Z_{G_N}(\,dx_k)
$$
with
\begin{eqnarray}
K_N(x_1,\dots,x_k)&=& \frac1{N^\nu}\sum_{l=1}^p c_l\sum_{j\in B_{n_l}^N}
\exp\left\{i\left(\frac jN,x_1+\cdots+x_k\right)\right\} \nonumber \\
&=&f_N(x_1,\dots,x_k)\sum_{l=1}^p c_l\tilde\chi_{n_l}(x_1+\cdots+x_k).
\label{(8.14)}
\end{eqnarray}
Here $f_N$ is the function defined in~(\ref{(8.5)}),
$G_N$ is the measure defined in~(\ref{(8.2)}), and
$\tilde\chi_n(\cdot)$ denotes the Fourier transform of
the indicator function of the unit cube
$\prod\limits_{j=1}^\nu[n^{(j)},n^{(j)}+1)$,
$n=(n^{(1)},\dots,n^{(\nu)})$.
Let us define the function
$$
K_0(x_1,\dots,x_k)=\sum_{l=1}^p c_l\tilde\chi_{n_l}(x_1+\cdots+x_k)
$$
and the measures~$\mu_N$ on $R^{k\nu}$ by the formula
\begin{eqnarray}
&&\mu_N(A)=\int_A|K_N(x_1,\dots,x_k)|^2 G_N(\,dx_1)\dots G_N(\,dx_k),
\nonumber \\
&&\qquad \quad A\in{\cal B}^{k\nu} \textrm{ and } N=0,1,\dots,
\label{(8.15)}
\end{eqnarray}
where $G_0$ is the vague limit of the measures~$G_N$.
To prove Theorem~8.2 it is enough to show that Lemma~8.3
can be applied with these spectral measures~$G_N$ and
functions~$K_N$. (We choose no exceptional rectangles~$P_j$
in this application of Lemma~8.3.) Since
$G_N\stackrel{v}{\rightarrow} G_0$, and $K_N\to K_0$
uniformly in all bounded regions in $R^{k\nu}$, it is
enough to show, besides proving Lemma~8.1, that
the measures $\mu_N$, $N=1,2,\dots$, tend weakly to the
(necessarily finite) measure~$\mu_0$ which is also defined
in~(\ref{(8.15)}), (in notation
$\mu_N\stackrel{w}{\rightarrow}\mu_0$), i.e.
$\int f(x)\mu_N(\,dx)\to\int f(x)\mu_0(\,dx)$ for all
continuous and bounded functions~$f$ on~$R^{k\nu}$.
\index{weak convergence of probability measures}
Then this convergence implies condition~(b) in Lemma~8.3.
Moreover, it is enough to show the slightly weaker statement
by which there exists some finite measure $\bar\mu_0$ such
that $\mu_N\stackrel{w}{\rightarrow}\bar\mu_0$, since then
$\bar\mu_0$ must coincide with $\mu_0$ because of the
relations $G_N\stackrel{v}{\rightarrow} G_0$ and
$K_N\to K_0$ uniformly in all bounded regions of $R^{k\nu}$,
and $K_0$ is a continuous function.
There is a well-known theorem in probability theory about the
equivalence between weak convergence of finite measures and the
convergence of their Fourier transforms. It would be natural to
apply this theorem for proving
$\mu_N\stackrel{w}{\rightarrow}\bar\mu_0$. On the other hand, we have
the additional information that the measures $\mu_N$, $N=1,2,\dots$,
are concentrated in the cubes $[-N\pi,N\pi)^{k\nu}$, since the
spectral measure~$G$ is concentrated in $[-\pi,\pi)^\nu$. It
is more fruitful to apply a version of the above mentioned theorem,
where we can exploit our additional information. We formulate the
following
\medskip\noindent
{\bf Lemma 8.4.} {\it Let $\mu_1,\mu_2,\dots$ be a sequence
of finite measures on $R^l$ such that
$\mu_N(R^l\setminus [-C_N\pi,C_N\pi)^l)=0$
for all $N=1,2,\dots$, with some sequence $C_N\to\infty$ as
$N\to\infty$. Define the modified Fourier transform
\index{modified Fourier transform}
$$
\varphi_N(t)=\int_{R^l}
\exp\left\{i\left(\frac{[tC_N]}{C_N},x\right)\right\}
\mu_N(\,dx), \quad t\in R^l,
$$
where $[tC_N]$ is the integer part of the vector $tC_N\in R^l$.
(For an $x\in R^l$ its integer part $[x]$ is the vector
$n\in\textrm{\BBB Z}_l$ for which
$x^{(p)}-1<n^{(p)}\le x^{(p)}$ for all $1\le p\le l$.)
Assume that $\lim\limits_{N\to\infty}\varphi_N(t)=\varphi(t)$
for all $t\in R^l$ with a function $\varphi(t)$ which is
continuous at the origin. Then the measures $\mu_N$ converge
weakly to a finite measure~$\mu_0$ on~$R^l$ as $N\to\infty$,
and $\varphi(t)$ is the Fourier transform of~$\mu_0$.}
\medskip\noindent
{\it Proof of Lemma 8.4.} First we show that for all
$\varepsilon>0$ there exists some $K=K(\varepsilon)$ such that
\begin{equation}
\mu_N(x\colon\; x\in R^l,\; |x^{(1)}|>K)<\varepsilon
\quad \textrm{for all \ }N\ge1.
\label{(8.16)}
\end{equation}
As $\varphi(t)$ is continuous at the origin there is some $\delta>0$
such that
\begin{equation}
|\varphi(0,\dots,0)-\varphi(t,0,\dots,0)|<\frac\varepsilon2
\quad\textrm{if \ } |t|<\delta. \label{(8.17)}
\end{equation}
We have
\begin{equation}
0\le \textrm{Re}\,[\varphi_N(0,\dots,0)-\varphi_N(t,0,\dots,0)]
\le2\varphi_N(0,\dots,0) \label{(8.18)}
\end{equation}
for all $N=1,2,\dots$. The sequence in the middle term
of~(\ref{(8.18)}) tends to
$$
\textrm{Re}\,[\varphi(0,\dots,0)-\varphi(t,0,\dots,0)]
$$
as $N\to\infty$. The right-hand side of~(\ref{(8.18)}) is a
bounded function in the variable~$N$, since it is convergent.
Hence the dominated convergence theorem can be applied. We
get because of the condition $C_N\to\infty$ and
relation~(\ref{(8.17)}) that
\begin{eqnarray*}
&&\lim_{N\to\infty} \int_0^{[\delta C_N]/C_N} \frac1\delta\,
\textrm{Re}\,[\varphi_N(0,\dots,0)-\varphi_N(t,0,\dots,0)]\,dt\\
&&\qquad=\int_0^\delta\frac1\delta\,
\textrm{Re}\,[\varphi(0,\dots,0)-\varphi(t,0,\dots,0)]\,dt
<\frac\varepsilon2
\end{eqnarray*}
with this $\delta>0$. Hence
\begin{eqnarray*}
\frac\varepsilon2&>& \lim_{N\to\infty}
\int_0^{[\delta C_N]/C_N}\frac1\delta\,
\textrm{Re}\,
[\varphi_N(0,\dots,0)-\varphi_N(t,0,\dots,0)]\,dt \\
&=&\lim_{N\to\infty}\int
\left(\frac1\delta\int_0^{[\delta C_N]/C_N}
\textrm{Re}\, [1-e^{i[tC_N]x^{(1)}/C_N}]\,dt\right)
\mu_N(\,dx)\\
&=&\lim_{N\to\infty}\int
\frac1{\delta C_N}\sum_{j=0}^{[\delta C_N]-1}
\textrm{Re}\,\left[1-e^{ijx^{(1)}/C_N}\right]\mu_N(\,dx)\\
&\ge&\limsup_{N\to\infty} \int_{\{|x^{(1)}|>K\}}
\frac1{\delta C_N} \sum_{j=0}^{[\delta C_N]-1}
\textrm{Re}\,\left[1-e^{ijx^{(1)}/C_N}\right]\mu_N(\,dx)\\
&=&\limsup_{N\to\infty}\int_{\{|x^{(1)}|>K\}}
\left(1-\frac1{\delta C_N}
\textrm{Re}\,\frac{1-e^{i[\delta C_N]x^{(1)}/C_N}}
{1-e^{ix^{(1)}/C_N}}\right)\mu_N(\,dx)
\end{eqnarray*}
with an arbitrary $K>0$. (In the last but one step of this
calculation we have exploited that
$\frac1{\delta C_N}\sum\limits_{j=0}^{[\delta C_N]-1}
\textrm{Re}\,[1-e^{ijx^{(1)}/C_N}]\ge0$ for all
$x^{(1)}\in R^1$.)
Since the measure $\mu_N$ is concentrated in
$\{x\colon\, x\in R^l,\;|x^{(1)}|\le C_N\pi\}$, and
\begin{eqnarray*}
\textrm{Re}\,\frac{1-e^{i[\delta C_N]x^{(1)}/C_N}}
{1-e^{ix^{(1)}/C_N}}
&=&\frac{\textrm{Re}\,\left(i e^{-ix^{(1)}/2C_N}
\left(1-e^{i[\delta C_N]x^{(1)}/C_N}\right)\right)}
{i(e^{-ix^{(1)}/2C_N}-e^{ix^{(1)}/2C_N})}\\
&\le&\frac1{\left|\sin \left(\dfrac{x^{(1)}}{2C_N}\right)\right|}
\le \frac{C_N\pi}{|x^{(1)}|}
\end{eqnarray*}
if $|x^{(1)}|\le C_N\pi$ (here we exploit that
$|\sin u|\ge\frac2\pi|u|$ if $|u|\le\frac\pi2$), we have with
the choice $K=\frac{2\pi}{\delta}$
$$
\frac\varepsilon2>\limsup_{N\to\infty}\int_{\{|x^{(1)}|>K\}}
\left(1-\left|\frac\pi{\delta x^{(1)}}\right|\right)\mu_N(\,dx)
\ge\limsup_{N\to\infty}\frac12\mu_N(|x^{(1)}|>K).
$$
As the measures $\mu_N$ are finite, for each fixed index~$N$ the
inequality $\mu_N(|x^{(1)}|>K)<\varepsilon$ holds with some
constant~$K=K(N)$ that may depend on~$N$. Hence the above
inequality implies that formula~(\ref{(8.16)}) holds for all $N\ge1$
with a possibly larger constant~$K$ that does not depend on~$N$.
Applying the same argument to the other coordinates we
find that for all $\varepsilon>0$ there exists some
$C(\varepsilon)<\infty$ such that
$$
\mu_N\left(R^l\setminus[-C(\varepsilon),
C(\varepsilon)]^l\right)<\varepsilon \quad
\textrm{for all } N=1,2,\dots.
$$
Consider the usual Fourier transforms
$$
\tilde\varphi_N(t)=\int_{R^l}e^{i(t,x)}\mu_N(\,dx), \quad t\in R^l.
$$
Then
\begin{eqnarray*}
|\varphi_N(t)-\tilde\varphi_N(t)|&\le& 2\varepsilon
+\int_{[-C(\varepsilon),C(\varepsilon)]^l}
\left|e^{i(t,x)}-e^{i([tC_N]/C_N,x)}\right|\mu_N(\,dx)\\
&\le& 2\varepsilon
+\frac{lC(\varepsilon)}{C_N}\mu_N(R^l)
\end{eqnarray*}
for all $\varepsilon>0$. Hence $\tilde\varphi_N(t)-\varphi_N(t)\to0$ as
$N\to\infty$, and $\tilde\varphi_N(t)\to\varphi(t)$. (Observe that
$\mu_N(R^l)=\varphi_N(0)\to\varphi(0)<\infty$ as $N\to\infty$, hence
the total masses $\mu_N(R^l)$ are uniformly bounded, and $C_N\to\infty$
by the conditions of Lemma~8.4.) Then Lemma~8.4 follows from standard
theorems on Fourier transforms. \hfill$\qed$
\medskip
We return to the proof of Theorem~8.2. We apply Lemma~8.4 with
$C_N=N$ and $l=k\nu$ for the measures $\mu_N$ defined
in~(\ref{(8.15)}). Because of the middle term in~(\ref{(8.14)})
we can write the modified Fourier transform $\varphi_N$ of the
measure $\mu_N$ as
$$
\varphi_N(t_1,\dots,t_k)=\sum_{r=1}^p\sum_{s=1}^p c_rc_s
\psi_N(t_1+n_r-n_s,\dots,t_k+n_r-n_s)
$$
with
\begin{eqnarray}
&&\psi_N(t_1,\dots,t_k)=\frac1{N^{2\nu}}\int\exp\left\{
i\frac1N((j_1,x_1)+\cdots+(j_k,x_k))\right\} \nonumber \\
&&\qquad\qquad\sum_{p\in B^N_0}\sum_{q\in B^N_0}
\exp\left\{i\left(\frac{p-q}N,x_1+\cdots+x_k\right)\right\}
G_N(\,dx_1)\dots G_N(\,dx_k) \nonumber \\
&&\qquad=\frac1{N^{2\nu-k\alpha}L(N)^k}\sum_{p\in B_0^N}\sum_{q\in B_0^N}
r(p-q+j_1)\cdots r(p-q+j_k), \label{(8.19)}
\end{eqnarray}
where $j_p=[t_pN]$, $t_p\in R^\nu$, $p=1,\dots,k$.
The asymptotic behaviour of $\psi_N(t_1,\dots,t_k)$ as
$N\to\infty$ can be investigated with the help of the last
relation and formula~(\ref{(8.1)}). Rewriting the last
double sum in the form of a single sum by fixing first
the variable $l=p-q\in [-N,N]^\nu\cap\textrm{\BBB Z}_\nu$,
and then summing up for~$l$ one gets
$$
\psi_N(t_1,\dots,t_k)=\int_{[-1,1]^\nu} f_N(t_1,\dots,t_k,x)\,dx
$$
with
\begin{eqnarray*}
&&f_N(t_1,\dots,t_k,x) \\
&&\qquad =\left(1-\frac{[|x^{(1)}N|]}N\right)\cdots
\left(1-\frac{[|x^{(\nu)}N|]}N\right)
\frac{r([xN]+j_1)}{N^{-\alpha}L(N)}\cdots
\frac{r([xN]+j_k)}{N^{-\alpha}L(N)}.
\end{eqnarray*}
(In the above calculation we exploited that in the last sum of
formula~(\ref{(8.19)}) the number of pairs $(p,q)$ for which
$p-q=l=(l_1,\dots,l_\nu)$ equals $(N-|l_1|)\cdots(N-|l_\nu|)$.)
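The counting fact used in this parenthetical can be verified by brute force in the one-dimensional case $\nu=1$ (a sanity check added here purely for illustration; in higher dimensions the count factorizes over the coordinates):

```python
def pair_count(N, l):
    """Number of pairs (p, q) with p, q in {1, ..., N} and p - q = l."""
    return sum(1 for p in range(1, N + 1)
                 for q in range(1, N + 1) if p - q == l)

# For every shift l with |l| <= N the brute-force count equals N - |l|,
# the weight that turns the double sum in (8.19) into a single sum over l.
N = 12
counts = {l: pair_count(N, l) for l in range(-N, N + 1)}
```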
It can be seen with the help of formula~(\ref{(8.1)}) that
for all $\varepsilon>0$ the convergence
\begin{equation}
f_N(t_1,\dots,t_k,x)\to f_0(t_1,\dots,t_k,x) \label{(8.20)}
\end{equation}
holds uniformly with the limit function
$$
f_0(t_1,\dots,t_k,x)=(1-|x^{(1)}|)\dots(1-|x^{(\nu)}|)
\frac{a\left(\frac{x+t_1}{|x+t_1|}\right)}{|x+t_1|^\alpha}\dots
\frac{a\left(\frac{x+t_k}{|x+t_k|}\right)}{|x+t_k|^\alpha}
$$
on the set $x\in[-1,1]^\nu\setminus
\bigcup\limits_{p=1}^k\{x\colon\;|x+t_p|\le\varepsilon\}$.
We claim that
$$
\psi_N(t_1,\dots,t_k)\to\psi_0(t_1,\dots,t_k)
=\int_{[-1,1]^\nu}f_0(t_1,\dots,t_k,x)\,dx,
$$
and $\psi_0$ is a continuous function.
This relation implies that $\mu_N\stackrel{w}{\rightarrow}\mu_0$. To
prove it, it is enough to show beside formula~(\ref{(8.20)}) that
\begin{equation}
\left|\int_{|x+t_p|<\varepsilon}
f_0(t_1,\dots,t_k,x)\,dx\right|
\le C\int_{|x+t_p|<\varepsilon}\prod_{l=1}^k
|x+t_l|^{-\alpha}\,dx
\le C'\varepsilon^{\nu/k-\alpha},
\quad 1\le p\le k, \label{(8.21)}
\end{equation}
with some $C>0$ and $C'>0$ by H\"older's inequality,
since $\nu-k\alpha>0$,
and $a(\cdot)$ is a bounded function. Similarly,
\begin{eqnarray*}
\int_{|x+t_p|<\varepsilon} |f_N(t_1,\dots,t_k,x)|\,dx
&&\le \prod_{1\le l\le k,\,l\neq p}\left[\int_{x\in[-1,1]^\nu}
\frac{|r([xN]+j_l)|^k}{N^{-k\alpha}L(N)^k}\,dx\right]^{1/k}\\
&&\qquad\times \left[\int_{|x+t_p|\le\varepsilon}
\frac{|r([xN]+j_p)|^k}{N^{-k\alpha}L(N)^k}\,dx\right]^{1/k}.
\end{eqnarray*}
It is not difficult to see, by using Karamata's theorem, that if
$L(\cdot)$ is a slowly varying function which is bounded in all
finite intervals, then for all numbers $\eta>0$ and $K>0$ there
is a threshold index~$N_0$ and a number $C=C(N_0,\eta,K)$ such
that
$$
L(uN)\le Cu^{-\eta}L(N) \quad\textrm{for all } 0<u\le K
\textrm{ and } N\ge N_0.
$$
Hence formula~(\ref{(8.1)}) implies that
\begin{equation}
\frac{|r([xN]+j_l)|}{N^{-\alpha}L(N)}
\le B\frac{|[xN]+j_l|^{-\alpha}L(|[xN]+j_l|)}{N^{-\alpha}L(N)}
\le B'\left|\frac{[xN]+j_l}N\right|^{-(\alpha+\eta)}
\le B''|x+t_l|^{-(\alpha+\eta)} \label{(8.23)}
\end{equation}
if $N\ge N_0$ with some constants
$B,B',B''<\infty$ depending on $\eta$ and $t_p$, $1\le p\le k$.
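As a numerical illustration of this Karamata-type bound, take the concrete slowly varying function $L(x)=\log x$ (our choice, as are the parameters $\eta=0.1$, $K=10$, $N_0=100$ and the constant $C=2.5$; none of them appear in the original text):

```python
import math

def ratio(u, N, eta):
    """L(uN) / (u**(-eta) * L(N)) for L = log; the bound says this stays below C."""
    return math.log(u * N) * u ** eta / math.log(N)

eta, K, N0 = 0.1, 10.0, 100
us = [K * (j + 1) / 200 for j in range(200)]        # grid in (0, K]
Ns = [N0, 10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6]
worst = max(ratio(u, N, eta) for u in us for N in Ns)
# worst stays below the constant C = 2.5 on the whole grid
```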
(Let us remark that~(\ref{(8.23)}) holds also for
$|[xN]+j_l|\le K_1$ with some $K_1>0$ independent of~$N$, i.e. when
the argument of $r(\cdot)$ is relatively small, because
$|r(n)|\le1$ for all $n\in\textrm{\BBB Z}_\nu$.) Therefore we get, by
choosing an $\eta>0$ such that $k(\alpha+\eta)<\nu$, the inequality
\begin{equation}
\int_{|x+t_p|<\varepsilon} |f_N(t_1,\dots,t_k,x)|\,dx
\le C\varepsilon^{\nu/k-(\alpha +\eta)} \label{($8.22'$)}
\end{equation}
with some $C<\infty$. The right-hand side of this
inequality tends to zero as $\varepsilon\to0$. Hence
we proved beside~(\ref{(8.20)}) formulae~(\ref{(8.21)})
and~(\ref{($8.22'$)}), therefore also the relation
$\mu_N\stackrel{w}{\rightarrow} \mu_0$. This means that with
our choice of the functions $K_N(\cdot)$ and measures $G_N$
all conditions of Lemma~8.3 are satisfied (if we know that also
Lemma~8.1 holds), and its application yields Theorem~8.2. Thus we
have proved Theorem~8.2 with the help of Lemma~8.1. \hfill$\qed$
It remains to prove Lemma~8.1.
\medskip\noindent
{\it Proof of Lemma~8.1.}\/ Introduce the notation
$$
K_N(x)=\prod_{j=1}^\nu\frac {e^{ix^{(j)}}-1}{N(e^{ix^{(j)}/N}-1)},
\quad N=1,2,\dots,
$$
and
$$
K_0(x)=\prod_{j=1}^\nu\frac {e^{ix^{(j)}}-1}{ix^{(j)}}.
$$
Let us consider the measures $\mu_N$ defined in
formula~(\ref{(8.15)}) in the special case $k=1$, $p=1$,
$c_1=1$. Then
$$
\mu_N(A)=\int_A |K_N(x)|^2\,G_N(\,dx).
$$
We have already seen in the proof of Theorem~8.2 that
$\mu_N\stackrel{w}{\rightarrow} \mu_0$ with some finite measure~$\mu_0$,
and the Fourier transform of~$\mu_0$ is
$$
\varphi_0(t)=\int_{[-1,1]^\nu}(1-|x^{(1)}|)\cdots (1-|x^{(\nu)}|)
\frac{a\left(\frac{x+t}{|x+t|}\right)}{|x+t|^\alpha}\,dx.
$$
Moreover, since $|K_N(x)|^2\to |K_0(x)|^2$ uniformly in
any bounded domain, it is natural to expect that
$G_N\stackrel{v}{\rightarrow}G_0$ with
$G_0(\,dx)=\frac1{|K_0(x)|^2}\mu_0(\,dx)$. But since
$K_0(x)=0$ in some points a direct proof of this
statement would be difficult. Hence we choose a
different approach to avoid these difficulties. First
we prove a result about the vague convergence of the
restrictions of the measures $G_N$ to appropriate cubes.
We show that for all $T\ge1$ there is a finite measure
$G_0^T$ concentrated on $(-T\pi,T\pi)^\nu$ such that
\begin{equation}
\lim_{N\to\infty}\int f(x)\,G_N(\,dx)=\int f(x)\,G_0^T(\,dx)
\label{(8.24)}
\end{equation}
for all continuous functions~$f$ which vanish outside the cube
$(-T\pi,T\pi)^\nu$.
Let a continuous function $f$ vanish outside the cube
$(-T\pi, T\pi)^\nu$ with some $T\ge1$. Let $M=[\frac N{2T}]$.
Then
\begin{eqnarray*}
\int f(x)G_N(\,dx)&&=\frac{N^\alpha}{L(N)}
\cdot \frac{L(M)}{M^\alpha}
\int f\left(\frac NMx\right)G_M(\,dx)\\
&&=\frac{N^\alpha L(M)}{M^\alpha L(N)}\int f
\left(\frac NMx\right) |K_M(x)|^{-2}\mu_M(\,dx)\\
&&\qquad\qquad \to (2T)^\alpha
\int f(2Tx)|K_0(x)|^{-2}\mu_0(\,dx)\\
&&\qquad\qquad\qquad
=\int f(x)\frac{(2T)^\alpha}{|K_0(\frac x{2T})|^2}\mu_0
\left(\,\frac{dx}{2T}\right)
\quad \textrm{as }N\to\infty,
\end{eqnarray*}
because $f(\frac NMx)|K_M(x)|^{-2}$ vanishes outside the cube
$[-\pi,\pi]^\nu$,
$$
f(\frac NMx)|K_M(x)|^{-2}\to f(2Tx)|K_0(x)|^{-2} \quad
\textrm{uniformly,}
$$
(the function~$|K_0(\cdot)|^{-2}$ is continuous in the cube
$[-\pi,\pi]^\nu$), and $\mu_M\stackrel{w}{\rightarrow}\mu_0$
as $N\to\infty$. Hence relation~(\ref{(8.24)}) holds. The
measures $G_0^T$ appearing in~(\ref{(8.24)}) are consistent
for different parameters~$T$, i.e.\ $G_0^T$ is the
restriction of the measure $G_0^{T'}$ to the cube
$(-T\pi,T\pi)^\nu$ if $T'>T$. This follows from the fact
that $\int f(x)G_0^T(\,dx)=\int f(x)G_0^{T'}(\,dx)$ for
all continuous functions with support in $(-T\pi,T\pi)^\nu$.
It can be seen with the help of these facts that there is
a locally finite measure $G_0$ on $R^\nu$ such that
$G_0^T$ is its restriction to the cube $(-T\pi,T\pi)^\nu$,
and $G_N\stackrel{v}{\rightarrow} G_0$.
Let us briefly explain why such a $\sigma$-finite measure
$G_0$ exists. The main problem is to show that the natural
candidate for $G_0$ is really a ($\sigma$-additive) measure.
To show this let us represent the measures $G_0^T$ as the
Lebesgue--Stieltjes measures of appropriate functions
$\bar G_0^T(x)$, $x\in R^\nu$. To define these functions
let us fix a number $a=(a_1,\dots,a_\nu)$ such that the
hyperplanes $x_j=a_j$ have zero $G_0^T$ measures for all
$1\le j\le\nu$ and $T$. Then we can define the functions
$\bar G_0^T$ as $\bar G_0^T(x_1,\dots,x_\nu)=(-1)^{\alpha(x)}
G_0^T\left(\prod\limits_{j=1}^\nu[a_j,x_j)\right)$,
where $\alpha(x)$ denotes for a vector $x=(x_1,\dots,x_\nu)$
the number of coordinates $j$, $1\le j\le\nu$, such that
$x_j<a_j$. These functions $\bar G_0^T$ are consistent for
different parameters~$T$ on the domains where they are defined,
hence they determine a function $\bar G_0(x)$, $x\in R^\nu$,
whose Lebesgue--Stieltjes measure is a locally finite
measure~$G_0$ on~$R^\nu$ with the desired properties.
Finally we prove the homogeneity property of the measure~$G_0$.
Since $G_N\stackrel{v}{\rightarrow}G_0$, we have for all
$s>0$ and continuous functions~$f$ with compact support that
\begin{eqnarray*}
\int f(x) G_0(\,dx)&=&\lim_{u\to\infty}\int f(x)\,G_u(\,dx)
=\lim_{u\to\infty}\frac{s^\alpha L(\frac us)}{L(u)}
\int f(sx)G_{\frac us}(\,dx)\\
&=&s^\alpha \int f(sx)G_0(\,dx)=\int f(x)s^\alpha
G_0 \left(\frac{dx}s\right).
\end{eqnarray*}
This identity implies the homogeneity property~(\ref{(8.3)})
of~$G_0$. Lemma~8.1 is proved. \hfill$\qed$
\medskip
The next result is a generalization of Theorem~8.2.
\medskip\noindent
{\bf Theorem~8.2$'$.} {\it Let $X_n$, $n\in\textrm{\BBB Z}_\nu$, be a
stationary Gaussian field with a correlation function $r(n)$ defined
in~(\ref{(8.1)}). Let $H(x)$ be a real function with the properties
$EH(X_n)=0$ and $EH(X_n)^2<\infty$. Let us consider the Fourier
expansion
\begin{equation}
H(x)=\sum_{j=1}^\infty c_jH_j(x), \quad \sum c_j^2j!<\infty,
\label{(8.25)}
\end{equation}
of the function~$H(\cdot)$ by the Hermite polynomials~$H_j$ (with
leading coefficients~1). Let $k$ be the smallest index in this
expansion such that $c_k\neq0$. If $0<k\alpha<\nu$, then the
multi-dimensional distributions of the random fields $Z^N_n$
defined in~(\ref{(1.1)}) with $A_N=N^{\nu-k\alpha/2}L(N)^{k/2}$
and $\xi_n=H(X_n)$ tend to the multi-dimensional distributions
of the random field $c_kZ^*_n$, $n\in\textrm{\BBB Z}_\nu$, where
the field~$Z^*_n$ is the same as in Theorem~8.2.}
\medskip
A natural question arises about the behaviour of the random
fields~$Z^N_n$ if the condition $k\alpha<\nu$ is violated,
i.e.\ if $k\alpha\ge\nu$. Let
me remark that in the case $k\alpha\ge\nu$ the field $Z^*_n$,
$n\in\textrm{\BBB Z}_\nu$, which appeared in the limit in
Theorem~$8.2'$ does not exist. The Wiener-It\^o integral
defining $Z^*_n$ is meaningless, because the integral which
should be finite to guarantee the existence of the Wiener--It\^o
integral is divergent in this case. Next I formulate a general
result which contains the answer to the above question as a
special case.
\medskip\noindent
{\bf Theorem~8.5.} {\it Let us consider a stationary Gaussian random
field $X_n$, $EX_n=0$, $EX_n^2=1$, $n\in\textrm{\BBB Z}_\nu$, with
correlation function $r(n)=EX_mX_{m+n}$, $m,n\in\textrm{\BBB Z}_\nu$.
Take a function $H(x)$ on the real line such that $EH(X_n)=0$ and
$EH(X_n)^2<\infty$. Take the Hermite expansion~(\ref{(8.25)}) of the
function~$H(x)$, and let $k$ be the smallest index in this expansion
such that $c_k\neq 0$. If
\begin{equation}
\sum_{n\in\textrm{\BBB Z}_\nu}|r(n)|^k<\infty, \label{(8.26)}
\end{equation}
then the limit
$$
\lim_{N\to\infty} EZ^N_n(H_l)^2=\lim_{N\to\infty}
N^{-\nu}\sum_{i\in B^N_n}\sum_{j\in B^N_n}r^l(i-j)=\sigma_l^2l!
$$
exists for all indices $l\ge k$, where $Z^N_n(H_l)$ is defined
in~(\ref{(1.1)}) with $A_N=N^{\nu/2}$, and $\xi_n=H_l(X_n)$ with
the $l$-th Hermite polynomial $H_l(x)$ with leading coefficient~1.
Moreover, also the inequality
$$
\sigma^2=\sum_{l=k}^\infty c_l^2l!\sigma_l^2<\infty
$$
holds.
The finite dimensional distributions of the random field $Z^N_n(H)$
defined in~(\ref{(1.1)}) with $A_N=N^{\nu/2}$ and $\xi_n=H(X_n)$
tend to the finite dimensional distributions of a random field
$\sigma Z^*_n$ with the number~$\sigma$ defined in the previous
relation, where $Z^*_n$, $n\in\textrm{\BBB Z}_\nu$, are independent,
standard normal random variables.}
\medskip
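As an illustration of the first limit relation of Theorem~8.5, take $\nu=1$ and the toy correlation function $r(n)=\rho^{|n|}$ (our choice, not the field of the theorem; it satisfies condition~(8.26) for every $k\ge1$). Then $N^{-1}\sum_i\sum_j r^l(i-j)$ converges to $\sum_n r^l(n)=(1+\rho^l)/(1-\rho^l)$:

```python
def avg_double_sum(rho, l, N):
    """N**-1 * sum over i, j in {1,...,N} of rho**(l*|i-j|),
    rewritten as a single sum over n = i - j with weight N - |n|."""
    return sum((1 - abs(n) / N) * rho ** (l * abs(n))
               for n in range(-(N - 1), N))

rho, l = 0.5, 2
limit = (1 + rho ** l) / (1 - rho ** l)   # sum over all integers n of rho**(l*|n|)
approx = avg_double_sum(rho, l, 2000)     # close to the limit already for N = 2000
```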
Theorem 8.5 can be applied if the conditions of Theorem~$8.2'$
hold with the only modification that the condition $k\alpha<\nu$
is replaced by the relation $k\alpha>\nu$. In this case the
relation~(\ref{(8.26)}) holds, and the large-scale limit of the random
field $Z^N_n$, $n\in\textrm{\BBB Z}_\nu$ with normalization
$A_N=N^{\nu/2}$ is a random field consisting of independent
standard normal random variables multiplied with the
number~$\sigma$. There is a slight generalization of
Theorem~8.5 which also covers the case $k\alpha=\nu$. In this
result we assume instead of the condition~(\ref{(8.26)}) that
$\sum\limits_{n\in \bar B_N} r(n)^k=L(N)$ with a slowly varying
function $L(\cdot)$, where
$\bar B_N=\{(n_1,\dots,n_\nu)\in\textrm{\BBB Z}_\nu\colon\;
-N\le n_j\le N,\; 1\le j\le\nu\}$, and some additional
condition is imposed which states that an appropriately defined
finite number $\sigma^2=\lim\limits_{N\to\infty}\sigma_N^2$, which
plays the role of the variance of the random variables in the
limiting field, exists. There is a similar large scale limit in
this case as in Theorem~8.5, the only difference is that the
norming constant in this case is $A_N=N^{\nu/2}L(N)^{1/2}$. This
result has the consequence that if the conditions of
Theorem~$8.2'$ hold with the only difference that $k\alpha=\nu$
instead of $k\alpha<\nu$, then the large scale limit exists
with norming constants $A_N=N^{\nu/2}L(N)$ with an appropriate
slowly varying function~$L(\cdot)$, and it consists of
independent Gaussian random variables with expectation zero.
The proof of Theorem~8.5 and its generalization that we did not
formulate here explicitly appeared in paper~\cite{r3}. I omit
its proof, I only make some short explanation about it.
In the proof we show that all moments of the random variables
$Z^N_n$ converge to the corresponding moments of the random
variables $Z^*_n$ as $N\to\infty$. The moments of the random
variables $Z_n^N$ can be calculated by means of the diagram
formula if we either rewrite them in the form of a Wiener--It\^o
integral or apply a version of the diagram formula which gives
the moments of Wick polynomials instead of Wiener--It\^o
integrals. In both cases the moments can be expressed explicitly
by means of the correlation function of the underlying Gaussian
random field. The most important step of the proof is to show
that we can select a special subclass of (closed) diagrams,
called regular diagrams in~\cite{r3} which yield the main
contribution to the moment $E(Z^N_n)^M$, and their contribution
can be simply calculated. The contribution of all remaining
diagrams is~$o(1)$, hence it is negligible. For the sake of
simplicity let us restrict our attention to the case
$H(x)=H_k(x)$, and let us explain the definition of the
regular diagrams in this special case. If $M$ is an
even number, then take the partitions $\{k_1,k_2\}$,
$\{k_3,k_4\}$,\dots, $\{k_{M-1},k_M\}$ of the set
$\{1,\dots,M\}$ to subsets consisting of exactly two
elements, to define the regular diagrams. They are
those (closed) diagrams for which we can choose one
of the above partitions in such a way that the
diagram contains only edges connecting vertices from
the $k_{2j-1}$-th and $k_{2j}$-th row with some
$1\le j\le \frac M2$, where $\{k_{2j-1},k_{2j}\}$ is
an element of the partition we have chosen. If $M$ is
an odd number, then there is no regular diagram.
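The pair partitions entering this definition are easy to count: an $M$-element set has $(M-1)!!=1\cdot3\cdots(M-1)$ partitions into two-element subsets when $M$ is even, and none when $M$ is odd; this is the same count that produces the Gaussian moments $(M-1)!!\,\sigma^M$ in the limit. A brute-force check of the count (added only as an illustration):

```python
def pair_partitions(elems):
    """Enumerate all partitions of the list elems into two-element subsets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, partner)] + tail

def double_factorial(m):
    """1 * 3 * ... * (m - 1) for even m."""
    out = 1
    for j in range(1, m, 2):
        out *= j
    return out

counts = {M: sum(1 for _ in pair_partitions(list(range(M))))
          for M in (2, 3, 4, 5, 6, 8)}
```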
\medskip
In Theorems~8.2 and~$8.2'$ we investigated some very special
subordinated fields. The next result shows that the same
limiting field as the one in Theorem~8.2 appears in a much
more general situation.
Let us define the field
\begin{equation}
\xi_n=\sum_{j=k}^\infty\frac1{j!}\int e^{i(n,x_1+\cdots+x_j)}
\alpha_j(x_1,\dots,x_j)\,Z_G(\,dx_1)\dots Z_G(\,dx_j),
\quad n\in\textrm{\BBB Z}_\nu, \label{(8.27)}
\end{equation}
where $Z_G$ is the random spectral measure adapted to a Gaussian
field~$X_n$, $n\in\textrm{\BBB Z}_\nu$, with correlation function
satisfying~(\ref{(8.1)}) with $0<\alpha<\frac \nu k$.
\medskip\noindent
{\bf Theorem 8.6.} {\it Let the fields $Z^N_n$ be defined by
formulae~(\ref{(8.27)}) and~(\ref{(8.1)}) with $A_N=N^{\nu-k\alpha/2}L(N)^{k/2}$. The
multi-dimensional distributions of the fields $Z_n^N$ tend to
those of the field $\alpha_k(0,\dots,0)Z^*_n$ where the field~$Z^*_n$
is the same as in Theorem~8.2 if the following conditions are
fulfilled:
\medskip
\begin{description}
\item[(i)]$\alpha_k(x_1,\dots,x_k)$ is a bounded function, continuous
at the origin, and such that \newline
$\alpha_k(0,\dots,0)\neq0$.
\item[(ii)]
\begin{eqnarray*}
\sum_{j=k+1}^\infty\frac1{j!}\frac{N^{-(j-k)\alpha}}{L(N)^{j-k}}
\int_{ R^{j\nu}}
&&\left|\alpha_j\left(\frac{x_1}N,\dots,\frac{x_j}N\right)\right|^2
\frac1{N^{2\nu}}
\left|\sum_{l\in B_0^N}e^{i(l/N,x_1+\cdots+x_j)}\right|^2 \\
&&\qquad G_N(\,dx_1)\dots G_N(\,dx_j)\to0,
\end{eqnarray*}
where $G_N$ is defined in~(\ref{(8.2)}).
\end{description}
}
\medskip\noindent
{\it Proof of Theorem 8.6.}\/ The proof is very similar to those of
Theorem~8.2 and~$8.2'$. The same argument as in the proof of
Theorem~$8.2'$ shows that because of condition~(ii) $\xi_n$ can be
substituted in the present proof by the following expression:
$$
\xi_n'=\frac1{k!}\int e^{i(n,x_1+\cdots+x_k)}\alpha_k(x_1,\dots,x_k)
Z_G(\,dx_1)\dots Z_G(\,dx_k), \quad n\in\textrm{\BBB Z}_\nu.
$$
Then a natural modification in the proof of Theorem~8.2 implies
Theorem~8.6. The main point in this modification is that we have to
substitute the measures~$\mu_N$ defined in formula~(\ref{(8.15)}) by the
following measure $\bar\mu_N$:
\begin{eqnarray*}
\bar\mu_N(A)&&=\int_A|K_N(x_1,\dots,x_k)|^2
\left|\alpha_k\left(\frac {x_1}N,\dots,\frac{x_k}N\right)\right|^2
G_N(\,dx_1)\dots G_N(\,dx_k), \\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad A\in{\cal B}^{k\nu},
\end{eqnarray*}
and to observe that because of condition~(i) the limit relation
$\mu_N\stackrel{w}{\rightarrow}\mu_0$ implies that
$\bar\mu_N\stackrel{w}{\rightarrow}|\alpha_k(0,\dots,0)|^2\mu_0$.
\hfill$\qed$
\medskip
The main problem in applying Theorem~8.6 is to check conditions~(i)
and~(ii). We remark without proof that any field
$\xi_n=H(X_{s_1+n},\dots,X_{s_p+n})$,
$s_1,\dots,s_p\in\textrm{\BBB Z}_\nu$ and $n\in\textrm{\BBB Z}_\nu$,
for which $E\xi_n^2<\infty$ satisfies condition~(ii). This is
proved in Remark~6.2 of~\cite{r9}. If the conditions~(i) or~(ii)
are violated, then a limit of different type may appear.
Finally we quote such a result without proof. Actually the
proof is similar to that of Theorem~8.2. At this point the
general formulation of Lemma~8.3 is useful. (See~\cite{r24}
for a proof.) Here we restrict ourselves to the case $\nu=1$.
The limiting field appearing in this result belongs to the
class of self-similar fields constructed in Remark~6.5.
Let $a_n$, $n=\dots,-1,0,1,\dots$, be a sequence of real numbers
such that
\begin{equation}
\begin{array}{l}
a_n=C(1)n^{-\beta-1}+o(n^{-\beta-1})\quad \textrm{if } n\ge 0 \\
a_n=C(2)|n|^{-\beta-1}+o(|n|^{-\beta-1})\quad \textrm{if } n< 0
\end{array}
\qquad -1<\beta<1. \label{(8.28)}
\end{equation}
Let $X_n$, $n=\dots,-1,0,1,\dots$, be a stationary Gaussian
sequence with correlation function
$r(n)=EX_0X_n=|n|^{-\alpha}L(|n|)$, \ $0<\alpha<1$, where
$L(\cdot)$ is a slowly varying function. Define the field
$\xi_n$, $n=\dots,-1,0,1,\dots$, as
\begin{equation}
\xi_n=\sum_{m=-\infty}^\infty a_mH_k(X_{m+n}). \label{(8.29)}
\end{equation}
\medskip\noindent
{\bf Theorem 8.7.} {\it Let a sequence $\xi_n$,
$n=\dots,-1,0,1,\dots$, be defined by~(\ref{(8.28)})
and (\ref{(8.29)}). Let
$0<k\alpha+2\beta<1$, and let one of the following conditions be
fulfilled:
\medskip
\begin{description}
\item[(a)] $0<\beta<1$.
\item[(b)] $0>\beta>-1$.
\item[(c)] $\beta=0$, $C(1)=-C(2)$, and
$\sum\limits_{n=0}^\infty|a_n+a_{-n}|<\infty$.
\end{description}
\medskip\noindent
Let us define the sequences $Z^N_n$ by formula~(\ref{(1.1)})
with $A_N=N^{1-\beta-k\alpha/2}L(N)^{k/2}$ and the above defined
field~$\xi_n$. The multi-dimensional distributions of the
sequences~$Z^N_n$ tend to those of the sequences
$D^{-k}Z^*_n(\alpha,\beta,k,b,c)$, where
\begin{eqnarray*}
Z^*_n(\alpha,\beta,k,b,c)&&=\int\tilde\chi_n(x_1+\cdots+x_k)\\
&&\qquad \left[b|x_1+\cdots+x_k|^\beta+ic|x_1+\cdots+x_k|^\beta
\textrm{\rm sign}\,(x_1+\cdots+x_k)\right] \\
&&\qquad\qquad |x_1|^{(\alpha-1)/2}\cdots |x_k|^{(\alpha-1)/2}\,
W(\,dx_1)\dots W(\,dx_k),
\end{eqnarray*}
$W(\cdot)$ denotes the white noise field, i.e. a random
spectral measure corresponding to the Lebesgue measure,
and the constants~$D$, $b$ and~$c$ are defined as
$D=2\Gamma(\alpha)\cos(\frac\alpha2\pi)$, and
\medskip
\begin{description}
\item[ ]
$b=2[C(1)+C(2)]\Gamma(-\beta)\sin(\frac{\beta+1}2\pi)$,
and $c=2[C(1)-C(2)]\Gamma(-\beta)\cos(\frac{\beta+1}2\pi)$
in cases~(a) and~(b), and
\item[ ] $b=\sum\limits_{n=-\infty}^\infty a_n$, and $c=C(1)$
in case~(c).
\end{description}
}
\chapter{History of the problems. Comments}
{\it Chapter 1.}
\medskip\noindent
In statistical physics the problem formulated in this chapter
appeared at the investigation of some physical models at
critical temperature. A discussion of this problem and
further references can be found in the fourth chapter of
the forthcoming book of Ya.~G.~Sinai~\cite{r33}. (Here and
in the later part of Chapter~9 we did not change the text
of the first edition. Thus expressions like forthcoming
book, recent paper, etc. refer to the time when the original
version of this Lecture Note appeared.) The first example
of a limit theorem for partial sums of random variables
which is considerably different from the independent
case was given by M.~Rosenblatt in~\cite{r28}. Further
results in this direction were proved by R.~L.~Dobrushin,
H.~Kesten and F.~Spitzer, P.~Major, M.~Rosenblatt and
M.~S.~Taqqu \cite{r7}, \cite{r8}, \cite{r9}, \cite{r24},
\cite{r29}, \cite{r30}, \cite{r34}, \cite{r37}. In most
of these papers only the one-dimensional case is
considered, i.e. the case when $R^\nu=R^1$, and it is
formulated in a different but equivalent way. The joint
distribution of the random variables
$A_N^{-1}\sum\limits_{j=1}^{[Nt]}\xi_j$,
$0<t<\infty$, is investigated in these papers as $N\to\infty$.
Let us also recall the following classical relation between
stationary and self-similar processes. Let $X(t)$, $t\in R^1$,
be a stationary process, and define $Y(t)=t^\alpha X(\log t)$,
$t>0$, with
some $\alpha>0$. Then, as it is not difficult to see,
the random processes $Y(t)$, $t>0$, and
$\frac{Y(ut)}{u^\alpha}$, $t>0$, have the same finite
dimensional distributions for all $u>0$. This can be
interpreted so that $Y(t)$ is a self-similar process
with parameter~$\alpha>0$ on the half-line $t>0$.
Conversely, if the finite dimensional distributions
of the processes $Y(t)$ and $\frac{Y(ut)}{u^\alpha}$,
$t>0$, agree for all $u>0$, then the process
$X(t)=\frac{Y(e^t)}{e^{\alpha t}}$, $t\in R^1$, is
stationary.
between stationary and self-similar processes. But
they have a rather limited importance in the
investigations of this work, because here we are
really interested in such random fields which are
simultaneously stationary and self-similar.
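The change of variables behind this correspondence is purely mechanical: if $Y(t)=t^\alpha X(\log t)$, then $Y(ut)/u^\alpha=t^\alpha X(\log t+\log u)$, i.e.\ scaling $Y$ amounts to shifting the argument of~$X$. This identity can be checked numerically with an arbitrary sample function (here $X=\sin$ and $\alpha=0.3$, both chosen only for illustration):

```python
import math

alpha = 0.3          # self-similarity parameter, chosen for illustration
X = math.sin         # an arbitrary sample function standing in for the process X

def Y(t):
    """Y(t) = t**alpha * X(log t), t > 0."""
    return t ** alpha * X(math.log(t))

# Y(u*t) / u**alpha equals t**alpha * X(log t + log u): a pure shift in the
# argument of X, which stationarity of X turns into equality in distribution.
errors = [abs(Y(u * t) / u ** alpha - t ** alpha * X(math.log(t) + math.log(u)))
          for u in (0.5, 2.0, 7.3) for t in (0.1, 1.0, 4.2)]
```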
\medskip\noindent
{\it Chapter 2.}
\medskip\noindent
Wick polynomials are widely used in the literature
of statistical physics. A detailed discussion about
Wick polynomials can be found in~\cite{r12}.
Theorems~2A and~2B are well-known, and they can be
found in the standard literature. Theorem~2C can be
found e.g. in Dynkin's book~\cite{r13} (Lemma~1.5).
Theorem~2.1 is due to Segal~\cite{r31}. It is closely
related to a result of Cameron and Martin~\cite{r4}.
The remarks at the end of the chapter about the
content of formula~2.1 are related to~\cite{r25}.
\medskip\noindent
{\it Chapter 3.}
\medskip\noindent
Random spectral measures were independently introduced
by Cramer and Kolmogorov \cite{r5}, \cite{r20}. They
could have been introduced by means of Stone's theorem
about the spectral representation of one-parameter
groups of unitary operators. Bochner's theorem can be
found in any standard book on functional analysis,
the proof of the Bochner--Schwartz theorem can be
found in~\cite{r15}. Let me remark that the same result
holds true if the space of test functions~${\cal S}$ is
substituted by~${\cal D}$.
\medskip\noindent
{\it Chapter 4.}
\medskip\noindent
The stochastic integral defined in this chapter is a
version of that introduced by~It\^o in~\cite{r18}.
This modified integral first appeared in Totoki's
lecture note~\cite{r38} in a special form. Its
definition is a little bit more difficult than the
definition of the original stochastic integral
introduced by It\^o, but it has the advantage that
the effect of the shift transformation can be better
studied with its help. Most results of this chapter
can be found in Dobrushin's paper~\cite{r7}. The
definition of Wiener--It\^o integrals in the case
when the spectral measure may have atoms is new. In
the new version of this lecture note I worked out
many arguments in a more detailed form than in the
old text. In particular, in Lemma~4.1 I gave a much
more detailed explanation of the statement that all
kernel functions of Wiener--It\^o integrals can be
well approximated by simple functions.
\medskip\noindent
{\it Chapter 5.}
\medskip\noindent
Proposition~5.1 is proved for the original Wiener--It\^o
integrals by It\^o in~\cite{r18}. Lemma~5.2 contains a
well-known formula about Hermite polynomials. The main
result of this chapter, Theorem~5.3, appeared in
Dobrushin's work~\cite{r7}. The proof given there is
not complete. Several non-trivial details are omitted.
I felt even necessary to present a more detailed proof
in this note when I wrote down its new version.
Theorem~5.3 is closely related to Feynman's diagram
formula. The result of Corollary~5.5 was already known
at the beginning of the XX. century. It was proved
with the help of some formal manipulations. This formal
calculation was justified by Taqqu in~\cite{r35} with
the help of some deep inequalities. In the new version
of this note I formulated a more general result than
in the older one. Here I gave a formula about the
moment of products of Wick polynomials and not only
of Hermite polynomials.
I could not find results similar to Propositions~5.6
and~5.7 in the literature of probability theory. On the
other hand, such results are well-known in statistical
physics, and they play an important role in constructive
field theory. A sharpened form of these results is
Nelson's deep hypercontractive inequality~\cite{r27},
which I formulate below.
\index{Nelson's hypercontractive inequality}
Let $X_t$, $t\in T$, and $Y_{t'}$, $t'\in T'$ be two sets
of jointly Gaussian random variables on some probability
spaces $(\Omega,{\cal A},P)$ and $(\Omega,{\cal A}',P')$.
Let ${\cal H}_1$ and ${\cal H}_1'$ be the Hilbert
spaces generated by the finite linear combinations
$\sum c_jX_{t_j}$ and $\sum c_jY_{t'_j}$. Let us define
the $\sigma$-algebras ${\cal B}=\sigma(X_t,\,t\in T)$
and ${\cal B}'=\sigma(Y_{t'},\,t'\in T')$ and the
Banach spaces $L_p(X)=L_p(\Omega,{\cal B},P)$,
$L_p(Y)=L_p(\Omega',{\cal B}',P')$, $1\le p\le\infty$.
Let $A$ be linear transformation from ${\cal H}_1$ to
${\cal H}_1'$ with norm not exceeding~1. We define an
operator $\Gamma(A)\colon L_p(X)\to L_{p'}(Y)$ for all
$1\le p,p'\le\infty$ in the following way. If $\eta$
is a homogeneous polynomial of the variables~$X_t$,
$$
\eta=\sum C_{j_1,\dots,j_s}^{t_1,\dots,t_s}
X^{j_1}_{t_1}\cdots X^{j_s}_{t_s}, \quad t_1,\dots,t_s\in T,
$$
then
$$
\Gamma(A)\colon\!\eta\!\colon=
\sum C_{j_1,\dots,j_s}^{t_1,\dots,t_s}
\colon\!(AX_{t_1})^{j_1}\cdots (AX_{t_s})^{j_s}\!\colon.
$$
It can be proved that this definition is meaningful, i.e.
$\Gamma(A)\colon\!\eta\!\colon$ does not depend on the
representation of $\eta$, and $\Gamma(A)$ can be extended
to a bounded operator from $L_1(X)$ to $L_1(Y)$ in a unique
way. This means in particular that $\Gamma(A)\xi$ is defined
for all $\xi\in L_p(X)$, $p\ge1$. Nelson's hypercontractive
inequality says the following. Let $A$ be a contraction
from ${\cal H}_1$ to ${\cal H}_1'$. Then $\Gamma(A)$ is a
contraction from $L_q(X)$ to $L_p(Y)$ for $1\le q\le p$
provided that
\begin{equation}
\|A\|\le\left( \frac{q-1}{p-1}\right)^{1/2}. \label{($+$)}
\end{equation}
If~(\ref{($+$)}) does not hold, then $\Gamma(A)$ is not a
bounded operator from $L_q(X)$ to $L_p(Y)$.
A further generalization of this result can be found
in~\cite{r16}.
The following discussion may help to understand the
relation between Nelson's hypercontractive inequality
and Corollary~5.6. Let us apply Nelson's inequality in
the special case when $(X_t,\,t\in T)=(Y_{t'},\,t'\in T')$
is a stationary Gaussian field with spectral measure~$G$,
$q=2$, $p=2m$ with some positive integer~$m$,
$A=c\cdot\textrm{Id}$, where $\textrm{Id}$ denotes
the identity operator, and $c=(2m-1)^{-1/2}$. Let
${\cal H}^c$ and ${\cal H}^c_n$ be the complexification
of the real Hilbert spaces ${\cal H}$ and ${\cal H}_n$
defined in Chapter~2. Then
$L_2(X)={\cal H}^c={\cal H}^c_0+{\cal H}_1^c+\cdots$
by Theorem~2.1 and formula~2.1. The operator
$\Gamma(c\cdot\textrm{Id}\,)$ equals
$c^n\cdot\textrm{Id}$ on the subspace ${\cal H}_n^c$.
If $h_n\in{\cal H}^n_G$, then $I_G(h_n)\in {\cal H}_n$,
hence the application of Nelson's inequality for the
operator $A=c\cdot\textrm{Id}$ shows that
$$
\left(EI_G(h_n)^{2m}\right)^{1/2m}
=c^{-n}\left(E(\Gamma(c\cdot\textrm{Id})
I_G(h_n))^{2m}\right)^{1/2m}
\le c^{-n}\left(EI_G(h_n)^2\right)^{1/2}
$$
i.e.
$$
EI_G(h_n)^{2m}\le c^{-2nm}\left(EI_G(h_n)^2\right)^m
=(2m-1)^{mn}\left(EI_G(h_n)^2\right)^m.
$$
This inequality is very similar to the second inequality
in Corollary~5.6, only the multiplying constants are
different. Moreover, for large~$m$ these multiplying
constants are near to each other. I remark that the
following weakened form of Nelson's inequality could
be deduced relatively easily from Corollary~5.6. Let
$A\colon\;{\cal H}_1\to{\cal H}'_1$ be a contraction
$\|A\|=c<1$. Then there exists a $\bar p=\bar p(c)>2$
such that $\Gamma(A)$ is a bounded operator from
$L_2(X)$ to $L_p(Y)$ for $p<\bar p$. This weakened
form of Nelson's inequality is sufficient in many
applications.
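The moment inequality $EI_G(h_n)^{2m}\le(2m-1)^{mn}\left(EI_G(h_n)^2\right)^m$ derived above can be sanity-checked in the simplest case, where the element of the $n$-th chaos is the Hermite polynomial $H_n(X)$ of a single standard normal variable~$X$ (an exact integer computation added only as an illustration; it does not touch the operator $\Gamma(A)$ itself):

```python
def hermite(n):
    """Ascending coefficients of the Hermite polynomial H_n with leading coefficient 1."""
    if n == 0:
        return [1]
    h_prev, h = [1], [0, 1]                  # H_0 = 1, H_1 = x
    for k in range(1, n):
        h_next = [0] + h                     # x * H_k(x)
        for i, c in enumerate(h_prev):
            h_next[i] -= k * c               # - k * H_{k-1}(x)
        h_prev, h = h, h_next
    return h

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def gaussian_moment(poly):
    """E p(X) for standard normal X: E X**(2j) = (2j-1)!!, odd moments vanish."""
    total = 0
    for k, c in enumerate(poly):
        if k % 2 == 0:
            df = 1
            for j in range(1, k, 2):
                df *= j
            total += c * df
    return total

def chaos_moment_and_bound(n, m):
    """(E H_n(X)**(2m), (2m-1)**(m*n) * (E H_n(X)**2)**m)."""
    h = hermite(n)
    power = [1]
    for _ in range(2 * m):
        power = poly_mul(power, h)
    second = gaussian_moment(poly_mul(h, h))     # equals n!
    return gaussian_moment(power), (2 * m - 1) ** (m * n) * second ** m
```

For instance, for $n=2$, $m=2$ one gets $EH_2(X)^4=60\le3^4\cdot(2!)^2=324$, and for $m=1$ the two sides coincide, as they must.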
\medskip\noindent
{\it Chapter 6.}
\medskip\noindent
Theorems~6.1,~6.2 and Corollary~6.4 were proved by
Dobrushin in~\cite{r7}. Taqqu proved similar results
in~\cite{r36}, but he gave a different representation.
Theorem~6.6 was proved by H.~P.~McKean in~\cite{r26}.
The proof of the lower bound uses some ideas
from~\cite{r14}. Remark~6.5 is from~\cite{r24}. As
Proposition~6.3 also indicates, some non-trivial
problems about the convergence of certain integrals
must be solved when constructing self-similar fields.
Such convergence problems are common in statistical
physics. To tackle such problems the so-called power
counting method (see e.g.~\cite{r22}) was worked out.
This method could also be applied in this chapter.
Part~(b) of Proposition~6.3 implies that the
self-similarity parameter~$\alpha$ cannot be chosen
in a larger domain in Corollary~6.4. One can ask
about the behaviour of the random variables $\xi_j$
and $\xi(\varphi)$ defined in Corollary~6.4 if the
self-similarity parameter~$\alpha$ tends to the
critical value~$\frac\nu2$. The variance of the
random variables~$\xi_j$ and $\xi(\varphi)$ tends
to infinity in this case, and the fields $\xi_j$,
$j\in\textrm{\BBB Z}_\nu$, and $\xi(\varphi)$,
$\varphi\in{\cal S}$, tend, after an appropriate
renormalization, to a field of independent normal
random variables in the discrete, and to a white
noise in the continuous case. The proof of these
results with a more detailed discussion will
appear in~\cite{r10}.
In a recent paper~\cite{r19} Kesten and Spitzer have
proved a limit theorem, where the limit field is a
self-similar field which seems not to belong to
the class of self-similar fields constructed in
Chapter~6. (We cannot however, exclude the
possibility that there exists some self-similar
field in the class defined in Theorem~6.2 with
the same distribution as this field, although it
is given by a completely different form.) This
self-similar field constructed by Kesten and
Spitzer is the only rigorously constructed
self-similar field known for us that does not
belong to the fields constructed in Theorem~6.2.
I describe this field, and then I make some
comments.
Let $B_1(t)$ and $B_2(t)$, $-\infty<t<\infty$, be two independent
Wiener processes, and let $K(x,s,t)$ denote the local time of the
process $B_1(\cdot)$ at the point~$x$ in the interval~$[s,t]$.
Define the random variables
$$
Z_n=\int K(x,n,n+1)\,B_2(\,dx), \quad n=\dots,-1,0,1,\dots.
$$
We claim that $Z_n$ is a stationary sequence which is self-similar
with parameter~$\frac34$. Indeed,
$$
\sum_{j=0}^{n-1}Z_j=\int K(x,0,n)\,B_2(\,dx),
$$
and
$$
K(x,0,\lambda)\stackrel{\Delta}{=}\lambda^{1/2}
K(\lambda^{-1/2}x,0,1) \quad\textrm{for all } \lambda>0
$$
because of the relation
$B_1(\lambda u)\stackrel{\Delta}{=}\lambda^{1/2}B_1(u)$.
Hence
$$
\sum_{j=0}^{n-1}Z_j\stackrel{\Delta}{=}n^{1/2}
\int K(n^{-1/2}x,0,1)B_2(\,dx)\stackrel{\Delta}{=}
n^{3/4}\int K(x,0,1)\,B_2(\,dx)=n^{3/4}Z_0.
$$
The invariance of the multi-dimensional distributions
of the field~$Z_n$ under the
transformation~(\ref{(1.1)}) can be seen similarly.
To see the stationarity of the field~$Z_n$ we need
the following two observations.
\medskip
\begin{description}
\item[(a)]$ K(x,s,t)\stackrel{\Delta}{=}K(x+\eta(s),0,t-s)$
with $\eta(s)=-B_1(-s)$. (The form of $\eta$ is not
important for us. What we need is that the pair $(\eta,K)$
is independent of $B_2$.)
\item[(b)] If $\alpha(x)$, $-\infty__