\magnification=\magstep1
\input amstex
\documentstyle{amsppt}
\font \BIg=cmbx10 scaled 1200
\font \eightrom=cmr8
\nopagenumbers
\leftheadtext\nofrills{{\rightline{\eightrom P\'eter Major}}}
\rightheadtext\nofrills{\leftline{\eightrom Poisson Law for the
Number of Lattice Points}}
\parskip=2pt plus 1pt
\parindent=20pt
\TagsOnRight
\define\r{\Bbb R^2}
\define\C{\Bbb C}
\define\f{\varphi}
\define\g{\gamma}
\define\e{\varepsilon}
\define\z{\Bbb Z^2}
\define\a{\alpha}
\define\th{\theta}
\define\OR{{\Bbb O}_R}
\define\de{\delta}
\define\J{\Cal J}
\define\DD{\Bbb {D}}
\define\HH{\Cal H_n}
\define\F{{\Bbb F}}
\define\FI {\Cal F^1_n}
\define\FO {\Cal F_n^2}
\define\bm{\text{{\bf m}}}
\define\BM{\bold m=(m_1,\dots,m_k)}
\define\bbm{\bar{\text{{\bf m}}}}
\define\BBM{\bar{\bold m}=(\bar m_1,\dots,\bar m_k)}
\define\M{\Cal M_n}
\define\llog{C(\log n)^{(k-1)\beta\tau}}
\define\ldotsl{l_1,\dots,l_k,\bar l_1,\dots,\bar l_k}
\define\fff{|\f(m_1)-\f(\bar m_1)|}
\define\pprod{\sideset \and '\to \prod}
\define\ffrac{\left(\frac n{|l_s-\bar l_s|}\right)}
\define\zmm{\zeta^{\bm,\bar m}}
\define\bmm{{\Bbb B}^{\bm,\bar m}}
\define\xdotsx{x_1,\dots,x_k}
\define\fdotsf{\f(m_1),\dots,\f(m_k)}
\define\fdotsx{f(\f(m_1))=x_1,\dots,f(\f(m_k))=x_k}
\define\ssum#1{\sideset \and^{(#1)}\to \sum}
\define\tl{\tilde l}
\define\nlt{\left(\frac n{\tl_s}\right)^{\tau-1}}
\null \vskip3mm \noindent
\hsize=14.5truecm
{\BIg Poisson Law for the Number of Lattice Points
\newline
in a Random Strip with Finite Area}
\smallskip
\noindent
P\'eter Major \newline {\eightpoint
Mathematical Institute of the Hungarian Academy of Sciences,\newline
P.O.B. 127, H--1364, Budapest, Hungary}
\bigskip
\rightline{\vbox{\hsize=11.5cm \noindent
{\bf Summary.} Let a smooth curve be given by a function $r=f(\f)$ in
polar
coordinate system in the plane, and let $R$ be a uniformly distributed
random variable on the interval $[a_1L,a_2L]$ with some $a_2>a_1>0$
and a large $L>0$. Ya.\ G. Sinai has conjectured that given some real
numbers $c_2>c_1$, the number
of lattice points in the domain between the curves
$\left(R+\dfrac{c_1}R\right)f(\f)$ and
$\left(R+\dfrac{c_2}R\right)f(\f)$ is asymptotically Poisson
distributed for ``good'' functions $f(\cdot)$. We
cannot prove this conjecture, but we show that if a probability measure
with some nice properties is given on the space of smooth functions,
then almost all functions with respect to this measure satisfy Sinai's
conjecture. This is an improvement of an earlier result of Sinai ~[9],
and actually the proof also contains many ideas of that paper.}}
\beginsection 1. Introduction
Let us consider a curve on the two-dimensional Euclidean
space $\r$ which is given by the equation $r=f(\f)$, $0\le\f\le\th$, with
some $0<\th\le 2\pi$ in polar coordinate system, where $f(\cdot)>0$ is a
continuous Lipschitz one function on $[0,\th]$. Given some
non-zero point $x=(x_1,x_2)\in\r$ let $|x|=\sqrt{x^2_1+x_2^2}$ denote
its absolute value and $\f(x)$
the angle between the vectors $(1,0)$ and $x=(x_1,x_2)$. Let us
fix two real numbers $c_2>c_1$ and define for all sufficiently
large $R>0$ (we need that $R+\dfrac{c_1}R>0$) the domain
$$
\aligned
\OR =\OR(f)
&=\left\{x\in\r,\;0\le\f((x))\le\th,
\vphantom{\left(R+\frac{c_1}R\right)f(\f(x))}\right.\\
&\qquad\left.\left(R+\frac{c_1}R\right)f(\f(x))
<|x|<\left(R+\frac{c_2}R\right)f(\f(x))\right\}.
\endaligned
\tag1.1
$$
Simple calculation shows that the area of the domain $\OR$ is
$$
\left(1+\frac{c_1+c_2}{2R^2}\right)(c_2-c_1)\int_0^\th f^2(\f)\, d\f.
$$
We are interested in the number of lattice points
in $\OR$, i.e. in the cardinality of the set $\OR\cap\z$, where $\z$
denotes the points in $\r$ with integer coordinates,
if $R$ is a uniformly distributed random variable in an interval $[a_1L,
a_2L]$. Here $a_2>a_1>0$
are fixed positive numbers, and the parameter $L>0$ is large. More
precisely, we are interested in the limiting behaviour of the number of
lattice points in this domain if $L\to\infty$. Ya.\ G. Sinai has
formulated
the conjecture that for ``typical'' nice curves the distribution of the
cardinality of this set tends to the Poisson
distribution with parameter $\lambda=(c_2-c_1)\int_0^\th f^2(\f)\,d\f$.
There is no explicitly defined curve for which we can verify the above
conjecture. On the other hand, we can show that if a probability measure
is given on the set of continuous Lipschitz one functions with some nice
properties, then almost all functions with respect to this measure
satisfy Sinai's conjecture. This is a strengthening of a result of Sinai
in paper~[9], and actually the proof also depends heavily on the ideas
of this
paper. To formulate our result first we introduce the following notion:
\proclaim{\bf Definition of Property A}\it A probability measure $P$ on
the set of continuous Lipschitz one functions $f(\f)$, $0<\f<\th$,
satisfies Property A if
\item{1.)} There are some positive numbers $00$ such
that almost all functions $f(\f)$, $0<\f<\th$, with respect to the
measure
$P$ satisfy the inequality $b_10$ and $D_k>0$ depending only on
~$k$.
\endproclaim
We shall prove the following
\proclaim{\bf Theorem}\it Let $P$ be a probability measure with
Property~A
on the space of continuous functions on the interval
$[0,\th]$, and let $R$ be a uniformly distributed random variable on the
interval $[a_1L, a_2L]$ with some $a_2>a_1>0$ and a
parameter $L>0$. Given
some function $f(\cdot)$ on $[0,\th]$, consider the set $\OR(f)$
defined by
formula (1.1). Let $\xi_L=\xi_L(f)$ denote the number of lattice points
in
$\OR(f)$, i.e.\ the cardinality of the set $\OR(f)\cap\z$. Then for
almost
all functions ~$f$ with respect to the measure $P$ the random variables
$\xi_L$ tend in distribution to the Poisson distribution with
parameter $\lambda=(c_2-c_1)\int_0^\th f^2(\f)\,d\f$ as $L\to\infty$.
\endproclaim
Sinai proved in [9] a weaker version of this result. He proved that
if the function $f(\cdot)$ is chosen randomly and independently of the
radius $R$
with respect to some probability distribution with nice properties, then
the distribution of the number of lattice points tends
to a mixture of Poisson distributions with different parameters. Sinai
expressed the conditions on
the distribution of the functions $f$ in a form slightly different from
ours,
with the help of certain conditional density functions. Let us remark
that our conditions are less restrictive, and this is important in such
applications as for instance the example given in Section 2.
Most ideas of this work came from paper [9]. The most important step of
the proof, the formulation of the Proposition can be traced in a hidden
way in ~[9], and even the Proposition's proof contains several ideas of
that paper. The proof of the Proposition is based on the estimate of the
second moments of a certain random variable. For Sinai, to prove his
weaker result, it was enough
to estimate the first moment of a similar random variable. But he also
remarked that the higher moments of such variables can be estimated
similarly, although some additional technical difficulties appear.
Problems about the number of lattice points have been investigated for a
long time in number theory and probabilistic number theory. See e.g.\ [8]
for a classical treatment, [6] for the investigation of number of
lattice points in a large circle with random centre or [5] for a modern
treatment of the problem. Recently, this problem got even greater
importance because of some questions in physics. We are interested in
the behaviour of the spectrum of an operator in a quantum system. In
particular, we would like to understand whether the quantization of a
completely integrable classical mechanical system (which has nice
trajectories) gives a different type
of spectrum than that of a hyperbolic system with chaotic behaviour.
There are certain conjectures about this problem. It is believed that
the local behaviour of the spectrum is similar to the
realizations of a Poisson process in the case of the quantum
counterpart of a ``typical'' completely integrable system, and
the spectrum satisfies Wigner's semicircle law in the case of
quantization of hyperbolic systems. Actually, the situation is much more
complex. We do not want
to discuss this problem in detail, because this is not the subject of
the present paper, and we are rather far from its good understanding.
The investigation of the spectrum of certain quantum systems leads to
the problem about the number of lattice points in a given domain. An
example for completely integrable
systems whose quantization leads to such a problem is the free motion
of a particle on a periodic rotation surface. (More precisely, we
make a factorization of the surface with respect to the period. In
such a way we get the motion of a particle on a compact surface
resembling
to a torus.) The quantization of this model leads to the problem
about the eigenvalues of the Laplace--Beltrami operator on this surface.
These eigenvalues can be calculated with a sufficiently good accuracy by
means of the so-called quasi-classical approximation. (See papers~[2]
and~[10]). Then the problem about the number of eigenvalues in an
interval leads to the
problem of counting the number of lattice points in a domain in $\Bbb
R^2$ whose boundary is determined through the rotation surface and the
interval. (See~[2] or~[10]). We are interested both in the local and
global behaviour of the spectrum.
%We also get such problems which were, at least
%by the knowledge of the author of the present paper, not investigated
%by classical number theory.
The local behaviour of the spectrum,
the number of eigenvalues in a randomly chosen interval of fixed constant
length, leads to the probabilistic problem investigated in this paper.
This is the reason why Sinai formulated his conjecture. We
cannot prove this conjecture for any explicitly given curve. Our aim
was
to show that it holds for typical curves. In the special case of circle,
which corresponds to the spectrum of the Laplace operator on the torus
$[0,1]\times[0,1]$, this conjecture does not hold. (See Problem~1 in
Section~2.) Sinai's conjecture implies that the number of
eigenvalues of the Laplace operator on a generic rotation surface
is asymptotically Poissonian
in a randomly chosen interval of constant length.
The global behaviour of the spectrum, the
number of eigenvalues in a large interval $[0,L]$ leads to problems more
intensively investigated
in classical number theory, namely to the number of lattice points in a
large domain. Here again, we are interested in the behaviour of generic
curves. An investigation in this direction is done in paper ~[7].
Other physical models lead to other number theoretical problems. We
mention in this direction paper~[3] and the references in it, where the
physical problem the authors considered led to the investigation of the
number of lattice points in a large circle with random
center. This problem was studied by means of computer simulation. Both
the local and global behaviour of the spectrum was investigated. The
computer simulations indicate a Poissonian local behaviour of this model
too. A good description of the global behaviour of the spectrum of this
model is still an open question.
The theorem formulated above also has the following generalization:
\proclaim{\bf Theorem$'$}\it For all $m=(m_1,m_2)\in\z$
define, with the help of a function $f$ and a
random variable $R$, the (random) mapping
$$
F=F(R,f)\: m\to\left(\f(m),R\left(\frac{|m|}{f(\f(m))}-R\right)\right),
\quad m\in\z
$$
and the random field
$$
\Cal P=\{F((m_1,m_2));\;(m_1, m_2)\in\z\}.
$$
If $R$ is a uniformly distributed random variable on an interval
$[a_1L,a_2L]$, then for almost all functions $f$ with respect to a
probability measure $P$ with Property ~A the finite dimensional
distributions of the random field $\Cal P$ tend to that of a Poisson
process on $[0,\th]\times [-\infty,\infty]$ with counting measure
$f^2(\f)\,d\f\,dx$ as $L\to\infty$. This convergence means that for
any $K\ge1$ and disjoint rectangles $[d_j,\bar d_j]\times[e_j,\bar
e_j]\subset[0,\th]\times[-\infty,\infty]$, $j=1,\dots,K$, the number of
points in these rectangles tend to independent Poissonian random
variables with parameters $\lambda_j=(\bar e_j-e_j)\int_{d_j}^{\bar
d_j}f^2(\f)\,d\f$, $j=1,\dots, K$.
\endproclaim
Theorem\,$'$ states in particular that the distribution of the number of
lattice points which are mapped by the transformation $F$ to the
rectangle $[0,\th]\times[c_1,c_2]$ tends to the Poisson distribution
with parameter $\lambda=(c_2-c_1)\int_0^\th f^2(\f)\,d\f$. In such a way
it contains the statement of the Theorem as a special case. The proof
of Theorem\,$'$ is based on the same ideas as the proof of the Theorem.
But since it is technically complicated we omit it.
\beginsection 2. Some remarks about the Theorem
The conditions of the Theorem can be slightly weakened. The following
version of the Theorem may be useful in certain applications.
\proclaim{\bf Stronger version of the Theorem}
\it The Theorem and Theorem$\,'$ remain valid if Part ~1.) of Property
A is replaced by the following weaker condition ~1.$'$)
\item{1.$'$)} There are some positive numbers $0x\right)\le
Ke^{-\lambda x} \tag2.1
$$
for all $x>0$ with some $K>0$ and $\lambda>0$.
\endproclaim
At the end of this paper we briefly explain the modifications
needed in the proof of this stronger version of the Theorem.
We discuss the content of Property A and give the following example:
\proclaim{\bf Remark 1}\it Let $W(t)=W(t,\omega)$ be a Wiener process, and
define the process $B(\f)=B(\f,\omega)=\int_0^\f W(t,\omega)\,dt$.
Then the Theorem holds for almost all trajectories of the process
$B(\f,\omega)$ if a
sufficiently big constant is added to it. More explicitly,
$B(\f,\omega)+C(\omega)$ satisfies
the Theorem if $C(\omega)>-\min_{0\le\f\le\theta}B(\f,\omega)+c$ with
some positive constant $c$, i.e. the distribution of the
number of lattice points in $\Bbb O_R(B(\f,\omega)+C(\omega))$
tends to the Poisson distribution with parameter
$$
(c_2-c_1)\int
(B(\f,\omega)+C(\omega))^2\,d\f
$$
if $R$ is uniformly distributed in the
interval $[a_1L,a_2 L]$ with some $a_2>a_1>0$, and $L\to \infty$.
\endproclaim
We briefly explain the proof of Remark 1 with the help of the Stronger
version of the Theorem.
Introduce the sigma algebra $\Cal F_{\f}=\{\Cal F(W(s)),\,s\le\f\}$. Then
the process $(B(\f,\omega),\Cal F_{\f})$ is a Gaussian Markov process.
We show that $B(\f,\omega)$ satisfies Part~2) of Property A with
$\tau=\frac32$. For this aim fix the parameters $\f_1,\dots,\f_k$ and
the values $W(\f_1)=y_1$, \dots, $W(\f_k)=y_k$. We calculate the
conditional density function of the
random vector ($B(\f_1),\dots,B(\f_k)$) under this condition. It equals
$$
p_k^{(y_1,\dots,y_k)}(x_1,\dots,x_k|\f_1,\dots,\f_k)
=p_1^{y_1}(x_1|\f_1)\prod_{i=2}^k
p^{(y_{i-1},y_i)}(x_i|\f_{i-1},\f_i,x_{i-1}),
$$
where $p^{(y_{i-1},y_i)}(x_i|\f_{i-1},\f_i,x_{i-1})$ is the
conditional density
function of $B(\f_i)$ under the condition $B(\f_{i-1})=x_{i-1}$,
$W(\f_{i-1})=y_{i-1}$ and $W(\f_{i})=y_{i}$, and
$p_1^{y_1}(x_1|\f_1)$ is the conditional density of $B(\f_1)$
under the condition $W(\f_1)=y_1$. These conditional density functions
are Gaussian with expectation $x_{i-1}+(\f_i-\f_{i-1})\dfrac
{y_{i-1}+y_i}2$ and variance
$$
\align
D(\f_{i-1},\f_{i})&=\int_{\f_{i-1}}^{\f_i}\int_{\f_{i-1}}^{\f_i}(\min
(s,t)-\f_{i-1})(\f_i-\max(s,t))\,ds\,dt\\
&=O\left(|\f_i-\f_{i-1}|^3\right)
\endalign
$$
for $i\ge2$, and the density of $p_1^{y_1}(x_1|\f_1)$ can be written
down similarly.
Part~2) of Property A can be proved with the help of the above formulas
after integration with respect to the conditions $W(\f_s)=y_s$,
$s=1,\dots,k$.
(The appearance of the parameter $\tau=3/2$ can also be explained
with the help of the observation that $B(\f)$ and $T^{-3/2}B(T\f)$
have the same distribution.) But the distribution
of $B(\f)$ does not satisfy Part 1) of Property A, since although the
derivative $B'(\f,\omega)=W(\f,\omega)$ is bounded, this bound
depends on $\omega$.
A natural way to overcome this difficulty is to make a conditioning of
the process $W(t)$ by the condition $\left\{\sup |W(t)|0$ or to consider the process $\bar W(t)$ which is the reflected
Wiener process $W(t)$ with reflective barriers $-A$ and $A$, then to
integrate this process
and apply the Theorem for the integrated process, (more precisely for
the integrated process $+A'$, with some $A'>A$). Then we can exploit
that
the probability of the event that this new process agrees with $B(\f)$
tends to 1 as
$A\to\infty$. To carry out this program we should prove that the
distribution of this new process satisfies Property ~A. This statement
is probably
true, but we cannot check Part 2b) of Property ~A. Hence we choose a
slightly different approach.
Define the function $h_A(t)$,
$$
h_A(t)=\left\{
\aligned
&t-4kA\quad\text{if }(4k-1)A\le t<(4k+1)A,\quad k=0,\pm1,\dots\\
&(4k+2)A-t\quad\text{if } (4k+1)A\le t<(4k+3)A,\quad k=0,\pm1,\dots
\endaligned\right.
$$
and the random process $B_1(\f)=h_A(B(\f))$. (The process $B_1(\f)$ is
actually the process $B(\f)$ after reflection with reflective barriers
$-A$ and $A$.)
Then the process $B_1(\f)+A'$ with $A'>A$ satisfies Property A if Part
1) is replaced by its weaker version Part $1')$. Part 2) of Property A
can be checked in this case, since the density function appearing in it
can be written down explicitly. It is not difficult to show that Part
$1')$ of Property ~A holds, since
$$
\sup_{0\le\f_1<\f_2\le\theta}\frac{|B_1(\f_1)-B_1
(\f_2)|}{|\f_1-\f_2|}\le \sup_{0\le \f\le\theta}|W(t)|.
$$
Then we get the proof of Remark 1 by letting $A$ tend to infinity.
\medskip
Although the technically most difficult
part in the proof of Remark 1 was to check Part 2b), actually the most
restrictive
condition of Property A is Part 2a), especially the restriction
$\tau<2$. It has
the following content. For fixed $0\le\f_1<\f_2<\cdots<\f_k$ the density
function of the random vector $(f(\f_1),\dots, f(\f_k))$ is a bounded
function with a bound that may depend on $\f_1$,\dots, $\f_k$.
Since
$b_10$ and integer $k>\dfrac{\tau+1}{2-\tau}$, define the event
$$
\align
A_h&=A_h(k,\f,K)\\&=\left\{\sup_{1\le j\le k}\left|\frac
{f(\f+jh)+f(\f+(j+2)h)-2f(\f+(j+1)h))}{h^2}\right|0$, hence relation $(2.2)$ and
Remark 2 holds.
\medskip
We finish these remarks by posing two open problems.
\demo{\it Problem 1} Give explicit curves which satisfy the Theorem. In
particular, let us consider the ellipses given by the equations
$x^2+ay^2=1$ with some $a>0$. Is it true that these ellipses
satisfy Sinai's conjecture for almost all $a>0$? The circle, i.e.\ the
ellipsis
with $a=1$ does not satisfy it. In this case $f(\f)=1$, and the
problem
leads to the following number theoretical question. Let $r(n)$ denote
the number of integer solutions of the equation $k^2+l^2=n$. What
can be said about the distribution of the number theoretical function
$r(n)$?
For the sake of simplicity, let us consider only the case
when $c_2-c_1<1/2$. Then the interval $\left[R+\dfrac{c_1}R,
R+\dfrac{c_2}R\right]$ contains the square root of only one integer
~$n$, and the number
of lattice points in $\OR$ equals $r(n)$ with this integer ~$n$. On the
other hand, the probability that this interval contains the square
root of a fixed integer ~$n$ is less than $const.L^{-2}$ if $R$ is
uniformly distributed on the interval $[a_1L, a_2L]$. The behaviour of
the function
$r(n)$ is fairly well-known. (See {e.g.}~[4].) For our purposes it is
enough to
know that $r(n)=0$ if the prime factorization of $n$ contains a prime
factor of the form $4k+3$ on an odd power. We also know that the density
of the integers
satisfying this property is ~one. The above facts imply that in
the case of circle the probability that $\OR$ contains no
lattice point tends to one as $L\to\infty$. A more detailed analysis
also shows that the conditional
probability of the event that the number of lattice points in $\OR$
tends to infinity is almost one under the condition that $\OR$ is not
empty.
On the other hand, some computer simulations
suggest that this is a degenerate case, and almost all ellipses satisfy
Sinai's conjecture (see~[1]).
\enddemo
\demo{\it Problem 2} Prove the Theorem for almost all functions with
respect to such probability measures which contain very smooth (e.g.\
analytic) functions with positive probability.
\beginsection 3. Reduction of the proof of the Theorem
In the proof we apply a version of the method of moments.
Let us first show that if a sequence of random variables $\xi_L$
satisfies the relation
$$
E\binom {\xi_L} k \to \frac {\lambda^k} {k!}\quad\text{for all }
k=1,2,\dots,\tag3.1
$$
as $L\to \infty$, then this sequence tends in distribution to the
Poisson distribution with parameter $\lambda$. To prove this, let us
observe that if $\xi $ is a Poisson distributed random variable with
parameter $\lambda$, then
$$
E\binom \xi k=\sum_{n=k}^\infty \frac
{\lambda^n}{n!}e^{-\lambda}\binom
nk=e^{-\lambda}\sum_{n=k}^\infty \frac{\lambda^n}{k!(n-k)!}
=\frac
{\lambda^k}{k!}e^{-\lambda}\sum_{n=0}^\infty\frac{\lambda^n}{n!}
=\frac{\lambda^k}{k!}.
$$
The moment $E\xi_L^k$ can be expressed as a linear combination of the
quantities $E\dbinom {\xi_L}p$, $0\le p\le k$. Hence if formula (3.1)
holds, then $E\xi_L^k$ tends to the $k$-th moment of a Poisson
distributed random variable with parameter $\lambda$. But if all moments
of a sequence of random variables converge to the moments of a
Poisson distribution with parameter $\lambda$, then this sequence
converges in distribution to the Poisson law with parameter $\lambda$.
We have chosen this approach, because the following identity holds:
For all functions $f$
$$
\binom {\xi_L(f,R)}k=\sum_{\{m_1,\dots,m_k\}\in\Bbb
Z^{2k}}\chi\left(\{m_s\in \Bbb O_R(f),\;\text{for all
}s=1,\dots,k\}\right). $$
Here $\chi(A)$ denotes the indicator function of the event $A$. The
summation is taken for such $k$-tuples of lattice points where all
points $m_1$,\dots, $m_k$ are different, and two $k$-tuples are
identified if they contain the same lattice points, only in different
order. Hence, for all functions ~$f$
$$
E\binom {\xi_L(f,R)}k=\sum_{\{m_1,\dots,m_k\}\in\Bbb Z^{2k}}
E\chi\left(\{m_s\in \Bbb
O_R(f)\text{ for all }s=1,\dots,k\}\right),\tag3.2
$$
where expectation is taken for a random variable $R$ which is
uniformly distributed in the interval $[a_1L,a_2L]$. We can handle the
terms in the sum (3.2), but only in the case when the differences
between
the angles $\f(m_s)$, $s=1,\dots,k$, are not too small. Hence, first we
reduce the proof of the Theorem to the investigation of a sum where
only
such terms appear. To formulate this statement more explicitly we need
some notations. First we explain the strategy of our proof.
We shall split the domain $\OR(f)$ by means of small
sectors $\DD_j$ and put even smaller buffer zones $\C_j$ between them.
We shall prove that the contribution of the sectors $\C_j$ is
negligible.
This is the content of Lemma~1. We can show with the help of Lemma~2
that the probability of the event that there is some $\DD_j$ which
contains two lattice points in $\OR(f)$ tends to zero. This is a
rareness type argument, typical in the proof of Poissonian limit
theorems. In our approach however, we need a stronger statement.
We shall drop all $k$-tuples which have two points in the same
sector $\DD_j$ with some $j$ and count only the remaining
$k$-tuples all of whose elements are in $\OR(f)$. We show that
only a negligible error is committed in this way. This is the content of
formula (3.3), and the reduction of
the Theorem to this statement is done by means of Lemma~3. The hard step
in the proof of the Theorem is the verification of formula~(3.3). It
states that some
moment type expression behaves so as if the number of lattice points of
$\OR$ in different sectors $\DD_j$ were independent. There is no such
independence in our model, but we shall prove a Proposition which can be
considered
as a law of large numbers type result (such results are related to some
sort of independence) and which implies the Theorem.
In these Lemmas and in the Proposition the random radius $R$
does not appear. These results formulate some properties which
almost all functions with respect to a probability measure with
Property A satisfy. Lemma ~1 is exceptional in this respect.
(The random radius $R$ appears in it, but it formulates a property
which all positive continouus Lipschitz one functions satisfy.)
We shall show that a function with these properties satisfies
Sinai's conjecture.
Put $$
0=\f_0(n)<\f_1(n)<\cdots<\f_{2p+1}(n)\le\th<\f_{2p+2}(n), \quad
(p=p(n)) $$ in such a way that $$ \f_{2j+1}-\f_{2j}=(\log
n)^{-\a},\quad j=0,1,\dots, p, $$ $$ \align
\f_{2j+2}-\f_{2j+1}&=(\log n)^{-\beta},\quad j=0,1,\dots, p-1,\\
&\qquad\text{and}\quad \f_{2p+2}-\f_{2p+1}<(\log n)^{-\alpha}
\endalign $$ with some $\a<\beta$ and $\a>\dfrac2{2-\tau}$, where
$\tau$ is the same number which appears in Part 2a) of Property A.
(For the sake of simpler notations in the sequel we denote by
$\log$ logarithm with base ~2.) Clearly, $p(n)<2\pi(\log n)^\a$.
Define also the sets $$ \aligned {\C}_j={\C}_j(n)&=\{x\in\r,\quad
An<|x|0$ as sufficiently small, $B>0$ as sufficiently large
fixed constants.
For all continuous Lipschitz one functions $f(\cdot)$, integers
$k=1,2,\dots,$ and $n>0$ we define the random variable
(depending on $R$)
$$
\align
\zeta_n(k,f,R)&=\sum_{0\le j_1<\cdots0$ and function $f(\cdot)$ define the sets $A_n(f)$,
$n=1,2,\dots$,
$$
\aligned
A_n(f)=\biggl\{(m,\bar m),&\quad m\in\z, \;\bar m\in \z,\;m\neq \bar m,
\\ & m\in\DD_j(n),\;\bar
m\in\DD_j(n)\text{ for some }0\le j\le p(n),\\
&\qquad\qquad\qquad\quad\left|\frac{|m|}{f(\f(m))}-\frac{|\bar m|}{f(\f(\bar m))}
\right|<\frac Kn \biggr\}.
\endaligned
$$
For almost all functions $f(\cdot)$ with respect to the measure $P$ the
relation
$$
\left|A_{2^n}(f)\right|<\frac {2^{2n}}{n^{\a(2-\tau)/2}}\quad\text{if
}n>n(f)
$$
holds.
\endproclaim
The following Lemma 3 is a generalization of Lemma 2.
\proclaim{\bf Lemma 3}\it Let the conditions of Lemma 2 be satisfied.
For arbitrary $K>0$, $k=0,1,\dots$ and function $f(\cdot)$ define the
sets $B_{n,k}(f)$, $n=1,2,\dots$
$$
\aligned
B_{n,k}(f)&=\biggl\{(m,\bar m,m_1,\dots, m_k),\; m\in \z,\;\bar
m\in\z,\;m_s\in \z\text { for }1\le s\le k,\\
&\qquad\qquad m\in\DD_j(n),\;\bar
m\in\DD_j(n),\text{ with some }0\le j\le p(n),\\
&\qquad\qquad \text{all lattice points $m$, $\bar m$ and $m_s$,
$s=1,\dots,k$ are different,}\\
&\qquad\qquad\left|\frac{|m|}{f(\f(m))}-\frac{|\bar m|}{f(\f(\bar m))}
\right|<\frac Kn ,\\
&\qquad\qquad\left|\frac{|m|}{f(\f(m))}-\frac{|m_s|}{f(\f( m_s))}
\right|<\frac Kn ,\quad 1\le s\le k\biggr\} .
\endaligned
$$
For almost all functions $f(\cdot)$ with respect to the measure $P$ the
relation
$$
\left|B_{2^n,k}(f)\right|n(f,k)
$$
holds with some $C_k>0$.
\endproclaim
Given some $L>1$, introduce the integer $n$ such that $2^n\le
L<2^{n+1}$, and define the random variables
$$
\xi^{(1)}_L =\xi^{(1)}_L(f,R)=\text{the number of }m\in \z \text{ such
that }m\in \Bbb O_R(f)\cap \bigcup_{j=0}^{p(2^n)}\DD_j(2^n)
$$
and
$$
\align
\xi^{(2)}_L =\xi^{(2)}_L(f,R)=&\, \text{the number of indices }j\\
& \text{ such that }\exists \; m\in\z\cap \Bbb O_R(f)\cap \DD_j
(2^n).
\endalign
$$
We claim that if the function $f(\cdot)$ is chosen randomly with respect
to a probability measure $P$ with Property A, then
$$
\xi^{(1)}_L(f,R)-\xi_L(f,R)\Rightarrow 0\quad \text{as }L\to \infty,
\tag3.4
$$
and
$$
\xi^{(2)}_L(f,R)-\xi_L(f,R)\Rightarrow 0\quad \text{as }L\to \infty.
\tag$3.4'$
$$
for almost all $f(\cdot)$.
If $m\in\Bbb O_R$, then $m\in\bigcup\limits_{j=0}^{p(2^n)}\C_j(2^n)\cup
\bigcup\limits_{j=0}^{p(2^n)}\DD_j(2^n)$ if the constants $A$ and $B$ in
the definition of $\C_j$ and $\DD_j$ are appropriately chosen, since
$a_1Ln(f,k)$.
\beginsection 4. Proof of the Theorem with the help of some Lemmas
The hardest part of the proof is the justification of formula (3.3). It
is based on a Proposition, which will be formulated below. To do this,
first we introduce some notations. Define the intervals
$$
\align
I_p(f,m,\de)=&\left[\frac{p\de}{|m|}f(\f(m)),
\frac{(p+1)\de}{|m|}f(\f(m))\right],\\
&\qquad
\de=\de_n=(\log n)^{-\eta},\;p=0,\pm1,\dots,\pm\frac D\de,\quad m\in\z
\endalign
$$
and
$$
\tilde{I}_z=\tilde{I}_z(n)=\left[\frac{zn}{(\log n)^{\tilde\eta}},
\frac{(z+1)n}{(\log n)^{\tilde\eta}}\right],\quad
\bar A(\log n)^{\tilde\eta}0$ and $\tilde \eta>0$ are also appropriately chosen. We
define with their help the sets
$$
\aligned
&\Cal M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z) \\
&\qquad=\biggl\{(m_1,\dots,m_k),\quad m_s\in\DD_{j_s}\cap\z,\quad
s=1,\dots,k,\\
&\qquad\qquad |m_1|\in \tilde {I}_{z}(n), \\
&\qquad\qquad
\frac{|m_s|}{f(\f(m_s))}-\frac{|m_1|}{f(\f(m_1))}\in
I_{p_{s}}(f,m_1,\de_n), \; s=2,\dots ,k,
\biggr\}.
\endaligned \tag4.1
$$
Put
$$
\align
S_{n,k}=\biggl\{(j_1,\dots,j_k,p_{2},\dots,p_{k},z),
\quad &0\le j_1L_0$ with some fixed
threshold $L_0$. Therefore (4.3) holds.
If $m_1\in \Bbb O_R(f)$, then
$$
\dfrac1R=\dfrac{f(\f(|m_1|))}{|m_1|}+O\left(\dfrac{1}{n^2}\right),\tag
4.5
$$
and if (4.2) holds, then
for all
$$
(m_1,\dots, m_k)\in\Cal M_{k,n}(j_1,\dots,j_k,p_2,\dots,p_k,z)
$$
the relations
$$
\align
\frac{f(\f(m_1))}{|m_1|}&=\frac{(\log n)^{\tilde \eta}f(\f_{2j_1})}
{zn}+r_1 \tag$4.5'$\\
\frac{|m_s|}{f(\f(m_s))}&=\frac{|m_1|}{f(\f(m_1))}+
\frac{p_s\delta_n (\log n)^{\tilde \eta }f(\f_{2j_1})}{zn}+r_s
\quad\text{for }2\le s\le k \tag$4.5''$
\endalign
$$
hold with some $r_s$, $1\le s\le k$, less than $const.\dfrac{(\log
n)^{-\omega}}n$.
Hence, if $m_s\in \Bbb O_R(f)$, \ $s=1$, \dots, $k$, then by (4.4)
$$
R+\frac {c_1(\log n)^{\tilde \eta}f(\f_{2j_1})}{zn}+\bar r_1\le
\frac {|m_1|}{f(\f(m_1))} \le
R+\frac {c_2(\log n)^{\tilde \eta}f(\f_{2j_1})}{zn}+\tilde r_1
$$
and
$$
\align
R+\frac {c_1(\log n)^{\tilde \eta}f(\f_{2j_1})}{zn}+\bar r_s&\le
\frac {|m_1|}{f(\f(m_1))}+\frac {p_s\delta_n(\log n)^{\tilde
\eta}f(\f_{2j_1})}{zn}\\
&\qquad\qquad \le
R+\frac {c_2(\log n)^{\tilde \eta}f(\f_{2j_1})}{zn} +\tilde r_s
\endalign
$$
for $2\le s\le k$
with $\bar r_s \frac{(\log n)^{\tilde
\eta}f(\f_{2j_1})}{zn}
\max\{-c_2,\max_{2\le s\le k}-c_2+p_s\delta_n\}-K\frac
{(\log n)^{-\omega}}n\\
B(m_1,\dots, m_k) &<\frac{(\log n)^{\tilde \eta}f(\f_{2j_1})
}{zn} \min\{-c_1,\min_{2\le s\le k}-c_1+p_s\delta_n\}+K
\frac{(\log n)^{-\omega}}n
\endaligned
$$
with an appropriate $K>0$. To complete the proof of ($4.3'$) we have
to show that (4.4) holds if
$
(m_1,\dots,m_k)\in\Cal M_{k,n}(j_1,\dots,j_k,p_2,\dots,p_k,z)
$
and
$$
\aligned
&\frac{(\log n)^{\tilde \eta}f(\f_{2j_1})}{zn}
\max\{-c_2,\max_{2\le s\le k}-c_2+p_s\delta_n\}+K
\frac{(\log n)^{-\omega}}n\\
&\qquad0$. Under these conditions
relations (4.5)--$(4.5'')$
hold again, and they imply together with the last relation that
$$
\align
R+\frac{c_1}R&<\frac{|m_1|}{f(\f(m_1))}+
\frac{(\log n)^{\tilde \eta}f(\f_{2j_1})
}{zn}(-c_1+p_s\delta_n)\\
&\qquad+c_1
\frac{(\log n)^{\tilde \eta}f(\f_{2j_1})}{zn}
-\frac K2\frac
{(\log n)^{-\omega}}n< \frac{|m_s|}{f(\f(m_s))}
\endalign
$$
for all $s=1,\dots,k$. The other inequality in relation (4.4) can
be proved similarly.
We can write
$$
E_L(k,f)=
\sum_{0\le j_1
\frac{a_2Ln^{\tilde \eta}}{2^n}f(\f_{2j_1})(1+Kn^{-\omega})\\
&\text{or
}z<\frac{a_1Ln^{\tilde \eta}} {2^n}f(\f_{2j_1})(1-Kn^{-\omega}),
\endaligned
\tag4.7
$$
with some appropriate $K>0$, since by formulas (4.3) and $(4.3')$ (with
their application for $2^n$) the event
$m_s\in \Bbb O_R(f)$ can occur in this case only for such $R$ which are
outside of the
interval $[a_1L, a_2L]$. To estimate $B_L$ in other cases introduce
the quantities
$$
\align
&\Cal
K_n^+(j_1,\dots,j_k,p_2,\dots,p_k,z)\\&\qquad\qquad=\!\!\!\!\!\!\!\!\!
\sup_{(m_1,\dots,m_k)\in\Cal M_{k,2^n}(j_1,\dots,j_k,p_2,
\dots,p_k,z)} \!\!\!\!\!\!\!\!\!
E\{\chi(m_s\in \Bbb O_R(f)\text { for all }s=1,\dots,k\},\\
&\Cal K_n^-(j_1,\dots,j_k,p_2,\dots,p_k,z)\\&\qquad\qquad=
\!\!\!\!\!\!\!\!\! \inf_{(m_1,\dots,m_k)\in\Cal
M_{k,2^n}(j_1,\dots,j_k,p_2, \dots,p_k,z)} \!\!\!\!\!\!\!\!\!
E\{\chi(m_s\in \Bbb O_R(f)\text { for all }s=1,\dots,k\},
\endalign
$$
Because of the Proposition we have
$$
\aligned
(1-\e_n)&z2^{2n}n^{-k\a-(k-1)\eta-2\tilde\eta}
\prod_{s=2}^kf^2(\f_{2j_s})\\
&\qquad\sum\Sb |p_{s}|< D {n^\eta}\\ s=2,\dots,k\endSb
\Cal K_n^-(j_1,\dots,j_k,p_2,\dots,p_k,z)
\le
B_L(f,j_1,\dots, j_k,z)\\
&\qquad\qquad
\le (1+\e_n)z2^{2n}n^{-k\a-(k-1)\eta-2\tilde\eta}
\prod_{s=2}^kf^2(\f_{2j_s})\\
&\qquad\qquad\qquad
\sum\Sb |p_{s}|< D {n^\eta}\\ s=2,\dots,k\endSb
\Cal K_n^+(j_1,\dots,j_k,p_2,\dots,p_k,z)
\endaligned\tag4.8
$$
for almost all functions $f(\cdot)$ with respect to a probability
measure with Property ~A, where $\e_n\to 0$ uniformly for
$(j_1,\dots,j_k,p_2,\dots,p_k,z)\in S_{k,2^n}$ as $n\to\infty$.
Introduce also the following notation: Given some interval
$A=[a,b]$,
integers $p_2$,\dots, $p_k$ and some number $0<\Delta<1$,
define the interval
$$
A(p_2,\dots,p_k,\Delta)=[a,b]\cap\bigcap_{s=2}^k[a+p_s \Delta(b-a),
b+p_s \Delta(b-a)],
$$
and let $\ell (A(p_2,\dots,p_k,\Delta))$ denote its length.
It follows from formula (4.3) and ($4.3'$) (with their application for
$2^n$) that
$$
\aligned
&\Cal K_n^+(j_1,\dots,j_k,p_2,\dots,p_k,z)=\frac1{(a_2-a_1)L}
\left[\ell(A(p_2,\dots,p_k,\Delta))+O(2^{-n}n^{-\omega})\right]\\
&\Cal K_n^-(j_1,\dots,j_k,p_2,\dots,p_k,z)=\frac1{(a_2-a_1)L}
\left[\ell(A(p_2,\dots,p_k,\Delta))+O(2^{-n}n^{-\omega})\right]
\endaligned \tag4.9
$$
with $A=[a,b]$, $a=-c_2\dfrac{f(\f_{2j_1}) n^{\tilde\eta}}{z2^n}$,
$b=-c_1\dfrac{f(\f_{2j_1}) n^{\tilde\eta}}{z2^n}$ and
$\Delta=\dfrac{n^{-\eta}}{c_2-c_1}$ if
$$
(m_1,\dots,m_k)\in\Cal M_{k,2^n}(j_1,\dots,j_k,p_2,\dots,p_k,z),
$$
and if
$
\dfrac{a_1Ln^{\tilde \eta}} {2^n}f(\f_{2j_1})(1+Kn^{-\omega})< z<
\dfrac{a_2Ln^{\tilde \eta}} {2^n}f(\f_{2j_1})(1-Kn^{-\omega})$.
We have to observe that in this case the interval of $R$ for which
$m_s\in \Bbb O_R(f)$, $s=1,\dots,k$, is contained in $[a_1L,a_2L]$.
Moreover, the right-hand side of the first line in formula (4.9) is an
upper bound for $\Cal K_n^+$ for arbitrary $z$.
We need the following Lemma 4, which is a version of Lemma 3 in [9].
\proclaim{\bf Lemma 4}\it Let an interval $A=[a,b]$ and some
number $0<\Delta<1$ be given. Then, using the notation introduced
above, the relation
$$
\sum\Sb -\infty \frac{2^{2n}}{n^{\a(2-\tau)/2}}\right)\le
const. n^{-\a(2-\tau)/2}.
$$
Since $\sum n^{-\a(2-\tau)/2}<\infty$, the last relation together with
the Borel--Cantelli lemma imply Lemma 2.
To prove (5.2) fix some small number $a>0$ and introduce the sectors
$$
\align
U_s=U_{s,n}(m)=&\left\{x\in\r,\quad \frac{as}n\le
\f(x)-\f(m)<\frac{a(s+1)}n\right\},\\
&\qquad\qquad\qquad\qquad s=0,\pm1,\pm2,\dots
\endalign
$$
We show that for $\bar m\in A_n^m(f)\cap U_s$ there exists some $\bar
K=\bar K(K,A,B,b_1,b_2,b_3)$ such that
$$
\bigl|| \bar m|-|m|\bigr|An$, $|\bar m|>An$, the above relations imply that
$$
1\le 2A^2n^2\left(1-\cos\frac an\right)+a^2\bar K^ 2\le (3A^2+\bar K)a^2
\quad \text{if }n>n_0.
$$
But this is impossible if $a>0$ is sufficiently small, hence $\bar U_0$
and $\bar U_{-1}$ are empty.
We can write
$$
E\left|A^m_n(f)\right|\le const.\sum_{s=1}^{\frac n{(\log n)^\a}}
s \sup_{m\in\bar U_s\cap\bar U_{-s-1}}
P\left(\left|\frac{|m|}{f(\f(m))}-\frac{|\bar m|}{f(\f(\bar m))}
\right|<\frac{\bar K}n\right).\tag5.3
$$
To estimate the above sum observe that by Part 2a) of Property A the
probability
density of the random vector $(f(\f(m)),f(\f(\bar m)))$ is less than
$\left(\dfrac n{s}\right)^{\tau}$ if $\bar m\in \bar U_s\cup \bar
U_{-s-1}$, and the Lebesgue measure of the set
$$
\biggl\{\bigl(f(\f(m)),f(\f(\bar m))\bigr),\quad b_1b-a
$$
Hence,
$$
\align
\Cal J_k(a,b)&=\int\limits_{\{a-b0$.
We make some comments about the content
of formula (6.1). The second term at the left-hand side is an estimate
of the volume of the domain in $\Bbb R^{2k}$ where the $k$-tuples from
$\Cal M_{k,n}$ must fall. This is a better approximation of this volume
than that given in the discussion after the formulation of the
Proposition. The second moment of $|\Cal M_{k,n}|$ is of order $n^4$
divided by some power of $\log n$ which depends on the parameters
$\eta$,
$\tilde\eta$ and $\alpha$ appearing in the definition of $\Cal M_{k,n}$.
The expression at the left-hand side of (6.1) is much smaller, since
in its estimate on the right-hand side we can divide by an arbitrary
large power
of $\log n$. Such an estimate holds only if the second term at the
left-hand side is appropriately chosen, i.e.\ if the volume of the
domain where
the points of $\Cal M_{k,n}$ must fall is computed with a sufficiently
good accuracy. Let us also remark that we have only gained
a logarithmic factor on a large
negative power by making an appropriate centering of $|\Cal M_{k,n}|$
on the left-hand side of ~(6.1).
First we show that formula (6.1) implies the Proposition. For this aim
we introduce the events
$$
\align
A_n(j_1,\dots,j_k,p_{2},&\dots,p_{k},z)=
\biggl\{f(\cdot),\quad \biggl||\Cal
M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z)|\\
& -\frac{\left(z+\frac12\right)n^2 \de_n^{k-1}}
{(\log n)^{2\tilde\eta}}
\HH(f,j_1,\dots,j_k)\biggr|>
\frac{n^2}{(\log n)^{M/3}}\biggr\}
\endalign
$$
in the space of continuous functions. By (6.1)
$$
P(A_n(j_1,\dots,j_k,p_{2},\dots,p_{k},z))<\frac{const.}{(\log
n)^{M/3}},
$$
and since $M>0$ can be chosen arbitrary large
$$
\aligned
\sum_{n=1}^{\infty}\;&\sum_{0\le j_10$ and
$(j_1,\dots,j_k,p_2,\dots,p_k,z)\in S_{n,k}$ we have
$$
\align
&E\left\{|\Cal M_{k,n}(f,j_1,\dots,j_k,p_2,\dots,p_k,z)|^2\right\}\\
&\qquad=\frac{\left(z+\frac12\right)^2n^4 \de_n^{2(k-1)}}
{(\log n)^{4\tilde\eta}}
E\left\{\HH(f,j_1,\dots,j_k)^2\right\}
+O\left(\frac {n^4}{(\log n)^M}\right),
\endalign
$$
where the $O(\cdot)$ is uniform in $j_1,\dots,j_k$,
$p_{2},\dots,p_{k}$ and $z$.
\endproclaim
\proclaim{\bf Lemma 6}\it For arbitrary $M>0$ and
$(j_1,\dots,j_k,p_{2},\dots,p_{k},z)\in S_{n,k}$ we have
$$
\align
&E|\Cal M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z)|
\HH(f,j_1,\dots,j_k)\\
&\qquad=\frac{\left(z+\frac12\right)n^2 \de_n^{(k-1)}}
{(\log n)^{2\tilde\eta}}
E\left\{\HH(f,j_1,\dots,j_k)^2\right\}
+O\left(\frac {n^2}{(\log n)^M}\right),
\endalign
$$
where the $O(\cdot)$ is uniform in $j_1,\dots,j_k$,
$p_{2},\dots,p_{k}$ and $z$.
\endproclaim
First we give an informal explanation about the proof of Lemmas ~5
and~6. The second moment at the left-hand side of the formula in
Lemma~5
can be expressed as the sum of the probabilities that two pairs of
$k$-tuples $\BM$ and $\BBM$ fall simultaneously into the set $\Cal
M_{k,n}$. This statement is expressed in formula (6.4). All terms in
this sum can be written as an integral of the
density function (introduced in Part ~2 of the definition of Property~A)
$$
p_{2k}(x_1,\dots,x_k,x_{k+1},\dots,x_{2k}|
\f(m_1),\dots,\f(m_k), \f(\bar m_1),\dots,\f(\bar m_k)) \tag6.3
$$
of the random vector
$f(\f(m_1)),\dots,f(\f(m_k))$, $f(\f(\bar m_1)),\dots,f(\f(\bar m_k))$.
The sum of these integrals can be considered as the approximating sum of
an integral in an appropriate domain. As the subsequent
calculation will show, this integral equals the main term of
the right-hand side of the formula in Lemma ~5. Lemma~5 gives a bound on
the error which is committed when the integral expressing $E\Cal H_n^2$
multiplied with the constant appearing in Lemma~5 is replaced by the sum
by which we expressed the left-hand side.
This
error is small, because by Part 2b) of Property ~A the density function
$(6.3)$ depends continuously on its arguments $x_1,\dots, x_{2k}$ and
$\f(m_1),\dots,\f(m_k)$, $\f(\bar m_1),\dots,\f(\bar m_k)$. But
this property supplies a good estimate only if all differences
between the angles $\f(m_s)$ and $\f(\bar m_s)$ are not too small. The
difference between $\f(m_s)$ and $\f(m_{s'})$ or $\f(\bar m_{s'})$ is
bigger than $\log n^{-\beta}$,
if $s\neq s'$ because of the existence of the buffer zones $\C_j$,
and the same statement holds for $\f(\bar m_s)$.
But $\f(m_s)-\f(\bar m_s)$ can be very small.
Hence we fix some large positive number $\gamma$ and split the sum
(6.4) which expresses $\Cal M_{k,n}$ into two parts. The first sum
contains all pairs such that
$|\f(m_s)-\f(\bar m_{s})|>\log n^{-\gamma}$ with some fixed $\gamma>0$
for all $s=1,\dots,k$. This sum can be
approximated by an appropriate integral very well because of Part~2b)
of Property~A, and this is the content
of Lemma~7B. The remaining sum can be bounded sufficiently well for our
purposes because of Part ~2a) of Property~A, and this is done in
Lemma~7A. The integral appearing in Lemma~7B is not equal to the main
term at the right-hand side of the formula in Lemma~5,
because the domain of integration was diminished by not taking
all terms in the sum (6.4). But we show with the help of formula (6.11)
that this change of domain of integration causes only a negligible
error. These estimates together imply Lemma~5. The proof of Lemma
~6 is analogous. Here Lemmas~8B and ~8A give the estimate of the main
and the error term if we split the sum expressing the left-hand side
of the formula in Lemma 6 in an appropriate way.
To carry out the above program we introduce
for fixed numbers $k$, $j_1$,\dots, ~$j_k$, $p_{2}$,\dots, $p_{k}$ and
$z$ the notation
$\Cal M_n=\Cal M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z)$, where
$\Cal M_{k,n}$ was defined in (4.1).
Let $\Cal Z=\Cal Z_k$ denote the set
$$
\Cal Z=\Cal Z_k=\{\BM,\quad m_s\in\z\text { for all
}s=1,\dots,k\},
$$
and put
$$
\Cal F_n=\Cal M_n\times\Cal M_n.
$$
Clearly,
$$
E\left\{|\Cal
M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z)|^2\right\}
=\sum_{\bm\in\Cal Z}\sum_{\bbm\in\Cal Z}P((\bm,\bbm)\in\Cal F_n).
\tag6.4
$$
To prove Lemma 5 we have to estimate the sum at the right-hand side of
(6.4). We shall split this sum into two parts and handle differently
those pairs $(\bm,\bbm)$, $\BM$ and $\BBM$, for which
$|\f(m_s)-\f(\bar m_s)|$ is very small
for some $1\le s\le k$ and those pairs for which all these differences
are not too small. To formulate this statement in a more explicit way we
introduce some notations.
Let us fix some very large $\gamma>0$ which may depend on
$k$, but not on $n$ or on $j_1,\dots,j_k$, $p_{2},\dots,p_{k}$ and
$z$. (This number will be chosen much bigger than $\a$, $\beta$, $\eta$
and ~$\tilde\eta$.) Define the set
$$
\align
G_n=\{(t_1,\dots,t_k,\bar t_1,\dots,\bar t_k),&\quad t_s,\bar t_s\in
\Bbb R^1,\\
&\quad \f_{2j_s}\le t_s,\bar t_s<\f_{2j_s+1},\;s=1,\dots,k\},
\endalign
$$
and split it into two disjoint sets $G_n^{(1)}$ and $G_n^{(2)}$ in the
following way: For $\f_{2j_s}\le t<\f_{2j_s+1}$ define
$\ell(t)$, $0\le\ell(t)1\text{ for all }1\le s\le k \}
$$
and
$$
G_n^{(2)}=\{(t_1,\dots,t_k,\bar t_1,\dots,\bar t_k)\in G_n,\quad
|\ell(t_s)-\ell(\bar t_s)|\le1\text{ for some }1\le s\le k\}.
$$
Clearly,
$$
G_n=G_n^{(1)}\cup G_n^{(2)}.
$$
Given some measurable $B\subset \Bbb R^{2k}$ define the integral
$$
\aligned
I(B)=\!\!\!\!\!\int\limits \Sb(t_1,\dots,t_k,\bar t_1,\dots,\bar t_k)\in B\\
(x_1,\dots,x_k,\bar x_1,\dots,\bar x_k)\in{\Bbb R}^{2k}\endSb
\!\!\!\!\!&x_2^2\cdots x^2_k\bar x_2^2\cdots\bar x_k^2\\
&p(x_1,\dots, x_k,\bar x_1,\dots,\bar x_k|
t_1,\dots, t_k,\bar t_1,\dots,\bar t_k) \\
&\quad dx_1\dots\,dx_k\,d\bar x_1\dots\,d\bar x_k
\,dt_1\dots\,dt_k\,d\bar t_1\dots\,d\bar t_k
\endaligned \tag6.5
$$
Since for fixed
$(t_1,\dots,t_k,\bar t_1,\dots,\bar t_k)$
$$
\aligned
\int_{(x_1,\dots,x_k,\bar x_1,\dots,\bar x_k)\in \Bbb R^{2k}}
&x_2^2\cdots x_k^2\bar x_2^2\cdots\bar x_k^2\\
&p(x_1,\dots, x_k,\bar x_1,\dots,\bar x_k|
t_1,\dots, t_k,\bar t_1,\dots,\bar t_k) \\
&\quad dx_1\dots dx_k\,d\bar x_1\dots\,d\bar x_k\\
&\quad\qquad=Ef^2(t_2)\cdots f^2(t_k)f^2(\bar t_2)\cdots f^2(\bar t_k),
\endaligned \tag6.6
$$
hence
$$
E\left\{\HH(f,j_1,\dots, j_k)^2\right\}
=I(G_n)=I(G_n^{(1)})+I(G_n^{(2)}). \tag6.7
$$
Let us also observe that since the right-hand side of (6.6) is bounded,
hence (6.5) and (6.6) imply that
$$
I(B)\le const.\lambda (B),\tag6.8
$$
where $\lambda (B)$ denotes the Lebesgue measure of the set $B$ in $\Bbb
R^{2k}$.
We split the set $\Cal F_n$ into two disjoint sets $\FI$ and $\FO$ in
the following way:
$$
\align
\FI&=\{(\bm,\bbm)=((m_1,\dots,m_k),(\bar m_1,\dots,\bar m_k)) \in \Cal
F_n; \\
&\qquad\qquad (\f(m_1),\dots,\f(m_k),\f(\bar m_1),\dots,\f(\bar m_k))
\in G_n^{(1)}\},\\ %{\vspace=1\jot}
\FO&=\{(\bm,\bbm)=((m_1,\dots,m_k),(\bar m_1,\dots,\bar m_k)) \in \Cal
F_n;\\
&\qquad\qquad (\f(m_1),\dots,\f(m_k),\f(\bar m_1),\dots,\f(\bar m_k))
\in G_n^{(2)}\}.
\endalign
$$
Put
$$
\Cal I(\FI)
=\sum_{\bm\in\Cal Z}\sum_{\bbm\in\Cal Z}P((\bm,\bbm)\in\FI)\tag6.9
$$
and
$$
\Cal I(\FO)
=\sum_{\bm\in\Cal Z}\sum_{\bbm\in\Cal Z}P((\bm,\bbm)\in\FO).\tag6.9$'$
$$
Then we have
$$
E\left\{|\Cal
M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z)|^2\right\}
=\Cal I(\FI)+\Cal I(\FO). \tag$6.10$
$$
It follows from (6.8) and the observation that
$\lambda(G_n^{(2)})0$.
\endproclaim
\proclaim{\bf Lemma 7B}\it For all $\g>0$
$$
\left|\Cal
I(\FI)-\frac{n^4\left(z+\frac12\right)^2\de_n^{2(k-1)}}{\log
n^{4\tilde\eta}}I(G_n^{(1)})\right| 0$.
\endproclaim
(In our problem the upper bound $const.n^3(\log n)^{K}$ in Lemma 7B can
be replaced by the weaker estimate $n^4(\log n)^{-M}$ with a
sufficiently large $M>0$.)
We reduce the proof of Lemma 6, similarly to Lemma 5, to two Lemmas ~8A
and ~8B.
To formulate them we introduce the following quantities. Given some
$\BM\in \Cal Z=\Cal Z_k$ define the sets
$$
G_n^{(i)}(\bm)=\left\{(t_1,\dots,t_k)\in
\Bbb R^k, \quad
(\f(m_1),\dots,\f(m_k),t_1,\dots, t_k)\in G_n^{(i)}\right\}
$$
and the integrals
$$
\Cal J(G_n^{(i)}(\bm))= \Cal J(G_n^{(i)}(\bm),f)=\int_
{(t_1,\dots,t_k)\in G_n^{(i)}(\bm)} f^2(t_2)\cdots
f^2(t_k)\,dt_1\dots\,dt_k $$
for $i=1,2$.
The identity
$$
\aligned
&E\Cal M_{k,n}(f,j_1,\dots,j_k,p_{2},\dots,p_{k},z)
\HH(f,j_1,\dots,j_k)\\
&\qquad=
\sum_{\bm\in \Cal Z} E\chi(\bm\in \Cal M_n)\Cal J(G_n^{(1)}(\bm))
+E\chi(\bm\in \Cal M_n)\Cal J(G_n^{(2)}(\bm)),
\endaligned \tag6.12
$$
holds.
Hence Lemma 6 follows from
formulas (6.7), (6.11), (6.12) and the following lemmas.
\proclaim{\bf Lemma 8A}\it If $\g=\g(M,k)$ is sufficiently large, then
$$
\sum_{\bm\in \Cal Z}
E\chi(\bm\in \Cal M_n)\Cal J(G_n^{(2)}(\bm))
0$.
\endproclaim
\proclaim{\bf Lemma 8B}\it For all $\g>0$
$$
\left|\sum_{\bm\in \Cal Z}E\chi(\bm\in \Cal M_n)\Cal J(G_n^{(1)}(\bm))
-\frac{n^2\left(z+\frac12\right)\de_n^{(k-1)}}{\log
n^{2\tilde\eta}}I(G_n^{(1)})\right| 0$.
\endproclaim
\beginsection 7. Proof of Lemmas 7A and 8A
\demo{\it Proof of Lemma 7A}
Fix the numbers $j_1$,\dots, $j_k$. Let us split the sets $\DD_{j_s}$
into
smaller sectors $\Bbb U_{s,l}$, $l=1,\dots,\dfrac n{(\log n)^\a}$
defined by the formula
$$
\Bbb U_{s,l}=\left\{x\: x\in\DD_{j_s},\quad\f_{2j_s}+\frac{l-1}n\le
\f(x)<\f_{2j_s}+\frac ln \right\}.
$$
Fix some positive number $K>0$, and define the set
$$
\align
{\Bbb B}(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)&=
\Bbb B(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k,j_1,\dots,j_k,f)\\
&=\biggl\{(\bm,\bbm)=((m_1,\dots,m_k),(\bar m_1,\dots,\bar
m_k))\in
\Cal Z\times \Cal Z,\\
&\qquad m_s\in \Bbb U_{s,l_s},\; \bar m_s\in \Bbb
U_{s,\bar l_s}, \quad s=1,\dots,k,\\
&\qquad
\left|\frac{|m_1|}{f(\f(m_1))}- \frac{|m_s|}{f(\f(m_s))}\right|
<\frac Kn,\\ \vspace{1\jot}
&\qquad\left|\frac{|\bar m_1|}{f(\f(\bar m_1))}-
\frac{|\bar m_s|}{f(\f(\bar m_s))}\right|<\frac Kn,\quad
s=2,\dots,k\biggr\}.
\endalign
$$
Introduce the random variables
$$
\zeta(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)=
|\Bbb B(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)|.
$$
The estimate
$$
\Cal I(\FO)\le \sum\Sb 0\le l_s,\bar l_s\le \frac n{(\log n)^\a}\text{
for all } s=1,\dots,k,\\
|l_s-\bar l_s|\le \frac{2n}{(\log n)^\g} \text { for some }1\le s\le
k\endSb
E\zeta(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k) \tag7.1
$$
holds. We prove some bounds on the expressions
$E\zeta(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)$.
The cases when $|l_s- \bar l_s|>1$ for all $s=1,\dots,k$ and when
$|l_s-\bar l_s|\le1$ for some
$1\le s\le k$ will be handled differently. First remark that all
sets $\Bbb U_{s,l}$ contain less than $const. n$ lattice points.
We also show that
$$
\aligned
&\left||\bar m_s|-|m_s|+|m_1|\frac{f(\f(m_s))}{f(\f(m_1))}
-|\bar m_1|\frac{f(\f( m_s))}{f(\f(\bar m_1))}\right|
1$ for all $s=1,\dots, k$.
Fix the values of
$f(\f(m_1)),\dots,f(\f(m_k))$ and $f(\f(\bar m_1))$ and estimate
conditional expectation
$$
\align
E(\zeta^{\bm,\bar m}&(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)\mid\\
&\qquad f(\f(m_1))=x_1,\dots, f(\f(m_k))=x_k,f(\f(\bar m_1))=\bar x_1).
\endalign
$$
Because of (7.2) we can determine with the help of the values
of $f(\f(m_1))$,\dots, $f(\f(m_k))$ and $f(\f(\bar m_1))$ a
set consisting of at
most $const.\prod\limits_{s=2}^k |l_s-\bar l_s|$ vectors $\bbm$ in such
a way
that only the vectors $(\bm,\bbm)$ with these $\bbm$ can be in the set
$$
\Bbb B^{\bm,\bar m}(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k).
$$
Let us
estimate the conditional probability of the event that such a vector
$\bbm$ really belongs to this set.
The conditional density of the random vector $f(\f(\bar
m_2)),\dots, f(\f(\bar m_k))$ with respect to the condition
$f(\f(m_1))=x_1$, \dots, $f(\f(m_k))=x_k$, $f(\f(\bar m_1))=\bar x_1$
can be bounded by
$$
\llog\frac{\dsize{ \prod_{s=1}^k \ffrac^\tau} }
{p(\xdotsx,\bar x_1|\fdotsf,\f(\bar m_1))}. \tag7.4
$$
We shall show that this conditional density function has the above
estimate for all $f(\f(\bar m_2))=\bar x_2,\dots,f(\f(\bar m_k))=\bar
x_k$.
Relation (7.4) follows from the inequality
$$
\align
&p(\xdotsx,\bar x_1,\dots,\bar x_k|\fdotsf,\f(\bar m_1),
\dots,\f(\bar m_k))\\
&\qquad\qquad\qquad<\llog \prod_{s=1}^k \ffrac^\tau.
\endalign
$$
The last inequality holds because of Part 2b) of Property A and the
following
observations: $|\f(m_s)-\f(\bar m_s)|>\dfrac{|l_s-\bar l_s|-1}n$,
and all
other terms $|\f(m_s)-\f(m_{s'})|$, $|\f(\bar m_s)-\f(m_{s'})|$ and
$|\f(\bar m_s)-\f(\bar m_{s'})|$ which appear in the upper bound of
the density we are considering are greater than $(\log n)^{-\beta}$.
(This statement holds, because there is a sector ${\C}_j$ between
these points.)
We claim that
$$
\aligned
P(&\bbm\in \bmm(\ldotsl)|\\
&\qquad\fdotsx, f(\f(\bar m_1))=\bar x_1)\\
&\qquad<\llog\frac{ \dsize{n^{-2k+2}\prod_{s=1}^k \ffrac^\tau}}
{p(\xdotsx,\bar x_1|\fdotsf,\f(\bar m_1))},
\endaligned \tag7.5
$$
and the conditional expectation of $\zmm$ satisfies the inequality
$$
\aligned
&E\bigl(\zmm(\ldotsl)|\\
&\qquad\qquad\fdotsx, f(\f(\bar m_1))=\bar x_1\bigr)\\
&\qquad<\llog\frac{\dsize{ n^{-2k+2}\prod_{s=1}^k \ffrac^\tau
\prod_{s=2}^k |l_s-\bar l_s|}}
{p(\xdotsx,\bar x_1|\fdotsf,\f(\bar m_1))}.
\endaligned \tag$7.5'$
$$
Indeed,
to calculate the conditional probability in (7.5) we have to integrate the
conditional density which was bounded in (7.4) with respect to the
variables $\bar
x_2,\dots,\bar x_k$ by the Lebesgue measure on an appropriate set. But
by the second line of formula (7.3) this set is contained in the set
$$
\left\{\left|\bar x_s-\frac{|\bar m_s|}{|\bar m_1|}x_1\right|<\frac
C{n^2},\;s=2,\dots,k\right\}
$$
which is a set with Lebesgue measure
less than
$const.n^{-2k+2}$. This fact together with our bound on the conditional
density imply the bound on the conditional density, and the estimate on
the conditional expectation is obtained if we remark that it is the sum
of the conditional probability of at most $const.\prod\limits_{s=2}^k
|l_s-\bar l_s|$ terms.
Finally we show that
$$
\aligned
E\zmm(\ldotsl)& <\llog n^{3-3k}\prod_{s=1}^k\ffrac^{\tau -1}\\
&\qquad\qquad \text {for all pairs }
(\bm,\bar m).
\endaligned\tag7.6
$$
To prove (7.6) we make the following observations:
The expectation of $\zmm$ can be obtained by integrating the left-hand
side of $(7.5')$ with respect to the measure
$$
p(\xdotsx,\bar
x_1|\fdotsf,\f(\bar m_1))\,dx_1\dots\,dx_k\,d\bar x_1
$$
on a subset of
$$
\align
A=&\biggl\{(\xdotsx,\bar x_1),\quad c_1\le x_1\le c_2,\\
&\qquad\left|x_s-\frac{|m_s|}{|m_1|}x_1\right|<\frac
C{n^2},\;s=2,\dots,k,
\quad \text{and } |x_1-\bar x_1|<\frac{C|l_1-\bar l_1|}n\biggr\},
\endalign
$$
where $C>0$ is some appropriate constant. The first inequalities in the
definition of the set $A$ appeared because of the definition of
${\Bbb B}(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)$, and the last one,
since
$$
|f(\f(\bar m_1))-f(\f(m_1))|\le b_3|\f(\bar m_1)-\f(m_1)|
\le 2b_3\frac{|l_1-\bar l_1|} n ,
$$
because of the Lipschitz one property of the function $f(\cdot)$.
Now formula (7.6) follows from $(7.5')$ and the fact that the Lebesgue
measure of the set $A$ is less than $const.n^{1-2k}|l_1-\bar l_1|$.
Let us now consider the case when there are $p\ge1$ indices $s$ such
that $|l_s-\bar l_s|\le1$. We claim that in this case
$$
\aligned
E\zmm(\ldotsl)& <\llog n^{-3k+p+3-\e}\pprod\ffrac^{\tau -1}\\
&\qquad\qquad \text {for all pairs }
(\bm,\bar m).
\endaligned\tag$7.6'$
$$
with $\e=\dfrac{2-\tau}\tau$, where $\pprod $ denotes product with
indices $s\in V$ with
$$
V=\{s; \quad 1\le s\le k \text{ and } |l_s-\bar l_s|\ge 2\}.
$$
We prove ($7.6'$) with some refinement of the proof of (7.6). We may
assume
that $1\notin V$, i.e. $|l_1-\bar l_1|\le1$ with the help of the
following observation. The set
${\Bbb B}(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)$ becomes smaller if
we
make an arbitrary permutation of the indices $s$ and choose $K/2$
instead of $K$ in its definition. On the other hand, the order of
the angles
$\f(m_s)$ has no importance in the subsequent arguments. We shall
consider the following two cases separately:
\item{ a)} $|\f(m_1)-\f(\bar m_1)|>n^{-1-\e}$,
\item{ b)} $|\f(m_1)-\f(\bar m_1)|\le n^{-1-\e}$.
In case a) we bound the conditional expectation of $\zmm$ under the
condition $\fdotsx, f(\f(\bar m_1))=\bar x_1$ similarly to the already
investigated case. Because
of (7.2) we can determine a set of vectors $\bbm$ with cardinality
$const.\pprod |l_s-\bar l_s|$
in such a way that only the pairs $(\bm,\bbm)$ with these $\bbm$ can be
in the set
${\Bbb B}^{\bbm,\bar m}(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)$.
Arguing similarly
as before, with the difference that now the conditional density of the
vector $(\f(f(\bar m_s)),\;s\in V)$ is estimated, we get that
$$
\aligned
&E(\zmm(\ldotsl)|\\
&\qquad\qquad\qquad\fdotsx, f(\f(\bar m_1))=\bar x_1)
\\ &\qquad<\frac{\dsize{\llog n^{-2k+2p}\pprod \ffrac^\tau |l_s-\bar
l_s|}} {p(\xdotsx,\bar x_1|\fdotsf,\f(\bar m_1))}| \f(m_1)
-\f(\bar m_1)|^{-\tau}.
\endaligned
$$
Now we get similarly to the previous case by integrating the
conditional expectation with respect to the distribution of the
condition
$$
f(\f(m_1))=x_1,\dots,f(\f(m_k))=x_k, f(\f(\bar m_1))=\bar x_1
$$
that
$$
\aligned
E\zmm(\ldotsl)& <\llog n^{2-3k+p}\\
&\qquad\pprod\ffrac^{\tau -1}
|\f(m_1)-\f(\bar m_1)|^{1-\tau}\\
&\qquad\qquad \text {for all pairs }
(\bm,\bar m).
\endaligned
$$
We only have to observe that in the present case the Lebesgue measure
of the set where we integrate the conditional expectation is less
than
$$
const.n^{-2k+2}|\f(m_1)-\f(\bar m_1)|.
$$
Indeed, it is contained
in a set defined analogously to the set $A$ defined in the previous
case, only the last inequality in its definition must be replaced with
the inequality $|x_1-\bar x_1|< C|\f(m_1)-\f(\bar m_1)|$.
This estimate implies $(7.6')$, since
$|\f(m_1)-\f(\bar m_1)|^{1-\tau}>n^{1-\e}$ in case a).
In case b) we show that
$$
\aligned
&E(\zmm\left(\ldotsl)|\fdotsx \right)\\
&\qquad\qquad<\llog\frac{\dsize{ n^{-2k+2p+1-\e}\pprod \ffrac^\tau
|l_s-\bar l_s|}} {p(\xdotsx|\fdotsf)}.
\endaligned \tag7.7
$$
In this case we estimate the conditional expectation when the
value of $f(\f(\bar m_1))$ is not prescribed in the condition.
Nevertheless, the value of $f(\f(\bar m_1))$ can be determined by means
of the
conditioning terms appearing at the left-hand side of (7.7) with a
precision of $const.n^{-1-\e}$
because of the Lipschitz one property of the function
$f(\cdot)$. Hence we can determine, with the help of relation ~(7.2),
$const.\pprod|l_s-
\bar l_s|$ elements $\bbm$ in such a way that under the
conditioning at the left-hand side of (7.7) the event
$$
(\bm,\bbm)\in
{\Bbb B}^{\bm,\bar m}(l_1,\dots,l_k,\bar l_1,\dots, \bar l_k)
$$
can take place only with these elements $\bbm$.
To prove relation (7.7) we remark that the conditional density function
of the vector
$\{f(\f(\bar m_s))=\bar x_s,\;s\in V\}$ with respect to the condition
$$
\{f(\f(m_1))= x_1,\dots,f(\f(m_k))= x_k\}
$$
is bounded by
$$
\llog\frac{\dsize{ \pprod \ffrac^\tau}}
{p(\xdotsx|\fdotsf)}, \tag7.8
$$
and for any $\bmm$
$$
\aligned
P(&\bbm\in \bmm(\ldotsl)|
\fdotsx)\\
&\qquad<\llog\frac{ \dsize{ n^{-2k+2p+1-\e}\pprod \ffrac^\tau}}
{p(\xdotsx|\fdotsf)}.
\endaligned \tag$7.8'$
$$
The estimate ($7.8'$) follows from the estimate (7.8) on the conditional
density and the following observation. To calculate the conditional
probability of the event \hbox{$\bbm\in \bmm(\ldotsl)$} one has to
integrate the conditional density bounded by formula ($7.8$) on the set
$$
\align
A^*=A^*(x)=&\biggl\{(\bar x_s,\;s\in V),\quad
\left|\bar x_s-\frac{|\bar m_s|}{|\bar
m_{s^*}|}\bar x_{s^*}\right|<\frac C{n^2},\\
&\qquad\qquad\text{ for all }s\in V
\text{ and }
\left|\bar x_{s^*}-\frac{|\bar m_{s^*}|}{|\bar
m_1|}x_1\right|0$ and arbitrary $s^*\in V$. (We may
assume that $V$ is non-empty, i.e.\ $p1 \text { for all }s=1,\dots,k
\tag7.9\\
&<\llog n^{4-2k+p-\e}\pprod\ffrac^{\tau-1}\\
&\qquad \text {if there are } p\ge1\text{ indices }s
\text { such that } |l_s-\bar l_s|\le 1
\endalign
$$
Let us remark that for all $\tilde l$ the equation $l_s-\bar l_s=\tl$
has less than $n$ solutions. Hence relations (7.1) and (7.9) imply that
$$
\Cal I(\FO)\le\llog\left( \ssum 0 + \sum_{p=1}^k \ssum p\right)
$$
with
$$
\ssum 0=n^{4-k}\sum\Sb 1<|\tl_s|\le \frac n{(\log
n)^\a},\;s=1,\dots,k\\
|\tl_s|<\frac n{(\log n)^\gamma}\text { for some }1\le s\le k\endSb
\prod_{s=1}^k \nlt
$$
and
$$
\ssum p=n^{4-k+p-\e}\sum_{ 1<|\tl_s|\le \frac n{(\log
n)^\a},\;s=1,\dots,k-p} \prod_{s=1}^{k-p} \nlt.
$$
We have
$$
\align
\ssum 0&\le const. n^{4-k}\left(\sum_{p=1}^{\frac n{(\log n)^\a} }
\left(\frac n{p}\right)^{\tau-1}\right)^{k-1}
\sum_{p=1}^{\frac n{(\log n)^\g}}
\left(\frac n{p}\right)^{\tau-1}\\
&\le const. n^{4}(\log n)^{-\g(2-\tau)}
\endalign
$$
and
$$
\ssum p\le const. n^{4-k+p-\e}\left(\sum_{p=1}^{\frac n{(\log n)^\a}}
\left(\frac n{p}\right)^{\tau-1}\right)^{k-p}0$ is sufficiently large. Lemma 7A is proved.
\qed
\enddemo
\demo{\it Proof of Lemma 8A}
Since $An<|m_s|\g(M,k)
\endaligned \tag7.10
$$
to prove Lemma 8A. Let us introduce the notation $\bar t_s=\f(m_s))$,
$s=1,\dots,k$. We can rewrite the expression in formula (7.10) in the
following form:
$$
\aligned
E\chi(\bm\in \Cal M_n)\Cal J(G_n^{(2)}(\bm))&=\int\limits\Sb
(\bar x_1,\dots,\bar x_k)\in D\\
( x_1,\dots, x_k)\in \Bbb R^k\\
(t_1,\dots,t_k)\in G^{(2)}_n(\bm)\endSb
x_2^2\cdots x_k^2\\
&\qquad p(x_1,\dots,x_k,\bar x_1,\dots,\bar
x_k|t_1,\dots,t_k,\bar t_1,\dots,\bar t_k)\\
&\qquad\qquad dt_1\dots\,dt_k\,dx_1\dots\,dx_k\,d\bar x_1\dots\,d\bar
x_k \endaligned
$$
with
$$
D=\biggl\{(\bar x_1,\dots,\bar x_k),\quad x_1\in I_z,\;\left|
\frac{|m_s|}{\bar x_s}-\frac{|m_1|}{\bar x_1}\right|\in I_{p_{j_s}}\biggr\}.
$$
Let us first fix $t_1,\dots,t_k$ and $\bar x_1,\dots, \bar x_k$, and
integrate with respect to the variables $x_1,\dots,x_k$. Because of the
Lipschitz one property of the function $f(\cdot)$, the density
$p(\cdot|\cdot)$ is concentrated on the set $|x_i-\bar x_i|\le
b_3|t_i-\bar t_i|$, and $|x_i|\g(k,M)$. Lemma 8A is proved. \qed
\enddemo
\beginsection8. The proof of Lemmas 7B and 8B
\demo{\it Proof of Lemma 7B}
Let us first observe that
$$
\aligned
&|p(\tilde x_1,\dots,\tilde x_{2k}|\tilde t_1,\dots,\tilde t_{2k})-
p( x_1,\dots,x_{2k}| t_1,\dots, t_{2k})|(\log n)^{-\g}$, if
$(t_1,\dots,t_{2k})\in G_n^{(1)}$. (Here $\pi(t_s)$, $s=1,\dots,2k$,
denotes again the monotone ordering of the sequence $t_s$,
$s=1,\dots,2k$.)
The relation $\BM\in \Cal M_n$ holds if and only if
$$
\align
&f(\f(m_s))\in I\left(m_1,m_s,f(\f(m_1))\right)\\
&\qquad=\left[\frac{|m_s|}{\dfrac{|m_1|}{f(\f(m_1))}
+\dfrac{(p_{j_s}+1)\de_n}{|m_1|}f(\f(m_1))}, \,
\frac{|m_s|}{\dfrac{|m_1|}{f(\f(m_1))}
+\dfrac{p_{j_s}\de_n}{|m_1|}f(\f(m_1))}\right],\\ \vspace{1\jot}
&\qquad\qquad\qquad s=2,\dots,k,\quad\f(m_s)\in [\f_{2j_s},\f_{2j_s+1}],\quad
s=1,\dots,k,\\
&\qquad\qquad\qquad \text{ and }|m_1|\in \tilde I_z \tag8.2
\endalign
$$
Since $\bar An<|m_1|<\bar Bn$ and $00$ instead of the inequality $|f(\f_2)-f(\f_{1}|\le
const.|\f_2-\f_1|$, then the bounds we get are worse with a
multiplying factor which is a power of $\log n$. Such estimates are
appropriate
in the proofs of Lemmas 7A and 8A, if the exponent $\g$ is chosen
sufficiently large in them. Hence we make the following approach.
Let us consider the event
$$
F(D,n)=\left\{
\sup_{0\le\f_1<\f_2\le\theta}\frac{|f(\f_1)-f(\f_2)|}{|\f_1-\f_2|}
0$. The estimates given in the proof of
Section 7
work in this case too, the only difference is that now an additional
multiplying factor
$(\log n)^{2k}$ appears. But this term causes no problem if $\g>0$ is
chosen sufficiently large. The same argument works in the proof of
Lemma ~8A and Lemma~2, but in the proof of Lemma ~3 we have to be more
careful.
The problem which arises in this case is that although $\alpha$ can be
chosen large in the definition of the sets $\DD_j(n)$, but it
cannot
depend on the number $k$ appearing in this lemma. We have to bound the
expression in formula (5.4) more carefully. Actually, it is enough
to bound the expression
$$
E|B_{n,k}^m(f)|\chi(F'(\e_k,D_k))\tag9.1
$$
with some appropriate $\e_k>0$ and $D_k>0$, where
$$
F'(\e,D)=\left\{f;\quad \sup_{0\le\f_1<\f_2\le\theta}
(\log n)^\e\le\frac{|f(\f_1)-f(\f_2)|}{|\f_1-\f_2|} (\log n)^{\e_k}\right)^{1/2}\\
&