K_2 \e\)\\ &\qquad\qquad\qquad \le K_2\e \quad \text{for almost all } \oo\in\Omega \endaligned \tag3.2 $$ with an appropriate constant $K_2>0$. First we show that relations (3.1) and (3.2) imply relation (2.6) with $K=3^{1/\alpha}(K_1+K_2)$. Indeed, if for some $\oo\in\Omega$ there is an index $k$, $n_0\le k\le N(n)$ with some $n_0=n_0(\e)$, which gives a non-zero contribution to the sum in (2.6) with the choice $K=3^{1/\alpha}(K_1+K_2)$, i.e.\ $\supp_{1\le s\le k}|U_s(\oo)|>3^{1/\alpha}(K_1+K_2)A_k\e$, then consider the interval $(m(j-1),m(j)]$, $1\le j\le L_n$, which contains this number $k$. In this case one of the following relations holds. Either
$$
\align
&\supp_{1\le s\le j}|U_{m(s)}(\oo)|>3^{1/\alpha}K_1A_k\e\ge K_1A_{m(j)}\e\\
\intertext{or}
\supp_{1\le s\le j}&\supp_{m(s-1)<l\le m(s)}|U_l(\oo)-U_{m(s-1)}(\oo)|>
3^{1/\alpha}K_2A_k\e\ge K_2A_{m(j)}\e.
\endalign
$$
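The dichotomy rests on the triangle inequality; the following display, added here as a sketch, makes the splitting of the supremum explicit. For $m(s-1)<l\le m(s)$ with $1\le s\le j$ (for $s=1$ the first term is $|U_{m(0)}(\oo)|$, which vanishes if $m(0)=0$),
$$
|U_l(\oo)|\le|U_{m(s-1)}(\oo)|+|U_l(\oo)-U_{m(s-1)}(\oo)|
\le\supp_{1\le s'\le j}|U_{m(s')}(\oo)|
+\supp_{1\le s'\le j}\supp_{m(s'-1)<l'\le m(s')}|U_{l'}(\oo)-U_{m(s'-1)}(\oo)|,
$$
so if $\supp_{1\le s\le k}|U_s(\oo)|>3^{1/\alpha}(K_1+K_2)A_k\e$, then at least one of the two suprema on the right-hand side must exceed its share, $3^{1/\alpha}K_1A_k\e$ or $3^{1/\alpha}K_2A_k\e$ respectively.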
Then the contribution of the terms with indices in the interval
$(m(j-1),m(j)]$ to the sum in the expression (2.6) is not greater
than $\log\dfrac{B_{m(j)+1}}{B_{m(j-1)+1}}$, and such a contribution
appears in the $j$-th term of one of the sums (3.1) or (3.2).
Hence relations (3.1) and (3.2), the identity $m(L_n)=N(n)$ together
with a summation for $1\le j\le L_n$ imply formula~(2.6).
To prove relation (3.1) introduce the random variables
$$
T_s(\oo)=U_{m(s)}(\oo)-U_{m(s-1)}(\oo), \quad s=1,2,\dots.
$$
The random variables $T_s(\oo)$, $s=1,2,\dots$ are independent. This
statement is equivalent to the independence of the random variables
$U_{N(n,k)}(\oo)-U_{N(n,k-1)}(\oo)$, $n=1,2,\dots$, $k=1,\dots,l_n$,
and this is a condition imposed in the formulation of the Basic Lemma.
Since $\limm_{n\to\infty}\dfrac{B_{N(n+1)}}{B_{N(n)}}=2$,
$A_n=B_n^{1/\alpha}$, there is some $n_0>0$ such that
$$
A_{N(n)}\ge 2^{(n-k)/2\alpha}A_{N(k)}\quad\text{for arbitrary }
n\ge n_0 \text{ and }k\le n.
$$
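This can be seen by a telescoping argument; the following sketch fills in the step. Since $B_{N(j+1)}/B_{N(j)}\to2$, there is an index $n_1$ with $B_{N(j+1)}/B_{N(j)}\ge2^{1/2}$ for all $j\ge n_1$, whence
$$
\frac{B_{N(n)}}{B_{N(k)}}=\prod_{j=k}^{n-1}\frac{B_{N(j+1)}}{B_{N(j)}}
\ge 2^{(n-k)/2}\quad\text{for }n\ge k\ge n_1,
$$
and taking $\alpha$-th roots (recall $A_n=B_n^{1/\alpha}$) yields the stated bound for $k\ge n_1$. For the finitely many indices $k<n_1$ the bound also holds once $n_0$ is chosen large enough, because $B_{N(n)}/B_{N(n_1)}=2^{(n-n_1)(1+o(1))}$ eventually exceeds $2^{n/2}\ge 2^{(n-k)/2}$.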
For all $s=1,2,\dots$ define the number $R(s)$ which satisfies the
inequality $L_{R(s)-1}< s\le L_{R(s)}$. The number $R(s)$ counts how many
terms of the form $N(l,0)=N(l-1)$ occur among the first $s$ terms of the
sequence $N(n,k)$. This fact and the definition of the value $m(j)$
imply that $N(R(s)-1)< m(s)\le N(R(s))$. (The sequence $R(s)$ is the
``inverse'' of the monotone sequence $L_s$. The relation $R(L_s)=s$
holds.) Hence
$$
\frac {A_{m(j)}}{A_{m(s)}}\ge
2^{(R(j)-R(s)-1)/2\alpha}\quad\text{for
}1\le s\le j\text { and }j\ge n_0.
$$
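To spell out this step (a sketch, using only the monotonicity of $A_n$ and the preceding inequality): since $m(j)>N(R(j)-1)$ and $m(s)\le N(R(s))$,
$$
\frac{A_{m(j)}}{A_{m(s)}}\ge\frac{A_{N(R(j)-1)}}{A_{N(R(s))}}
\ge 2^{((R(j)-1)-R(s))/2\alpha}=2^{(R(j)-R(s)-1)/2\alpha},
$$
where the second inequality is the estimate $A_{N(n)}\ge 2^{(n-k)/2\alpha}A_{N(k)}$ applied with $n=R(j)-1$ and $k=R(s)$; the condition $j\ge n_0$ keeps these indices in the admissible range. When $R(s)=R(j)$ the bound is trivial, since the ratio is at least $1\ge 2^{-1/2\alpha}$.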
Let us fix some $r\ge n_0$. We shall show, by applying the above relation
for $s\le j$ and by putting in one block those indices $s$ for which
$N(r-1)<m(s)\le N(r)$, that
$$
\align
&\sum_{j=L_{r-1}+1}^{L_{r}}\log\frac{B_{m(j)+1}}{B_{m(j-1)+1}}
I\(\frac{\supp_{1\le s\le j}\supp_{m(s-1)<l\le m(s)}
|U_l(\oo)-U_{m(s-1)}(\oo)|}{A_{m(j)}}> K_2 \e\)\\
&\qquad\qquad\le2 \sum_{u=1}^r\(I\(Z_u^{(1)}(\oo)>
\frac{K_2\e2^{(r-u-1)/\alpha}}2\)+I\(Z_u^{(2)}(\oo)>
\frac{K_2\e2^{(r-u-1)/\alpha}}2\)\)
\endalign
$$
for all $r\ge n_0$. Indeed, the left-hand side of this inequality is
non-zero only if one of the terms on the right-hand side is non-zero. In
this case the left-hand side is bounded by
$\summ_{j=L_{r-1}+1}^{L_{r}}\log\dfrac{B_{m(j)+1}}{B_{m(j-1)+1}}\le 2$,
and one of the summands on the right-hand side is non-zero, since
$Z_u^{(1)}(\oo)+Z_u^{(2)}(\oo)> 2^{-u/\alpha}K_2\e A_{m(j)}\ge
K_2\e2^{(r-u-1)/\alpha}$ for some $1\le u\le r$. Hence the inequality
also holds in this case. By summing up this inequality for $r=1,\dots,n$
we get the following bound for the expression in (3.2):
$$
\align
&\limsup_{n\to\infty}\frac1n \sum_{j=1}^{L_{n}}\log\frac
{B_{m(j)+1}}{B_{m(j-1)+1}} I\(\frac{\supp_{1\le
s\le j}\supp_{m(s-1)<l\le m(s)}|U_l(\oo)-U_{m(s-1)}(\oo)|}{A_{m(j)}}> K_2 \e\)\\
&\qquad\le \limsup_{n\to\infty}\frac2n \sum_{r=1}^n\sum_{u=1}^r
\biggl(I\(Z_u^{(1)}(\oo)>\frac{K_2\e2^{(r-u-1)/\alpha}}2\) \tag3.9 \\
&\hskip6.5truecm +I\(Z_u^{(2)}(\oo)>\frac{K_2\e2^{(r-u-1)/\alpha}}2\)
\biggr). \endalign
$$
Let us define the random variables
$$
X_u^{(i)}(\oo)=\sum_{p=0}^\infty I\(Z_u^{(i)}(\oo)\ge\frac{K_2\e}4
2^{(p-1)/\alpha}\), \quad u=0,1,2,\dots,\quad i=1,2.
$$
Then by changing the order of summation on the right-hand side of (3.9)
we get that the left-hand side of formula (3.2) can be bounded by the
expression
$$
\limsup_{n\to\infty}\frac2n\sum_{u=1}^n\(X_u^{(1)}(\oo)+X_u^{(2)}(\oo)\).
$$
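The change in the order of summation can be spelled out as follows (a sketch; $i=1,2$). Substituting $p=r-u$,
$$
\sum_{r=1}^n\sum_{u=1}^r I\(Z_u^{(i)}(\oo)>\frac{K_2\e2^{(r-u-1)/\alpha}}2\)
=\sum_{u=1}^n\sum_{p=0}^{n-u} I\(Z_u^{(i)}(\oo)>\frac{K_2\e}2\,2^{(p-1)/\alpha}\)
\le\sum_{u=1}^n X_u^{(i)}(\oo),
$$
since $\frac{K_2\e}2 2^{(p-1)/\alpha}\ge\frac{K_2\e}4 2^{(p-1)/\alpha}$, so each indicator in the inner sum is dominated by the corresponding term $I\(Z_u^{(i)}(\oo)\ge\frac{K_2\e}4 2^{(p-1)/\alpha}\)$ in the definition of $X_u^{(i)}(\oo)$.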
The averages of the random variables
$X_u^{(1)}(\oo)+X_u^{(2)}(\oo)-EX_u^{(1)}(\oo)-EX_u^{(2)}(\oo)$ tend to
zero with probability one. Indeed, the random variables $X_u^{(i)}(\oo)$
satisfy the laws of large numbers both for $i=1$ and $i=2$, because
they are independent, and by relation (3.8) the moments of these random
variables are finite. (The estimates $P(X_u^{(i)}>x)\le C_2
2^{-\gamma x/\alpha}$, $i=1,2$, $u\ge n_0$, follow from relation (2.4)
if $K_2>0$ is chosen sufficiently large. This can be proved similarly to
the estimate on the probability of $P(\chi_r(\oo)>x)$ made after
formula (3.7).) Moreover, $EX_u^{(i)}(\oo)\le K\e$ for all $u\ge
n_0(\e)$ and $i=1,2$, with an appropriate constant $K>0$, and as a
consequence
$$
\limsup_{n\to\infty}\frac2n\sum_{u=1}^n\(EX_u^{(1)}(\oo)+EX_u^{(2)}(\oo)
\)\le 4K\e.
$$
These relations imply formula (3.2). The Basic Lemma is proved.
\beginsection 4. The proof of Theorems 1 and 2
{\it Proof of Theorem 1.}\/ Let $\eta_n(\oo)$, $n=1,2,\dots$, be a
sequence of independent Gaussian random variables such that
$E\eta_n(\oo)=0$ and $E\eta_n^2(\oo)=\sigma_n^2$. Let us fix a number
$\e>0$. We want to construct a sequence of independent random variables
$\tilde\xi_n^{(\e)}(\oo)$, $n=1,2,\dots$, which has the same
distribution as the sequence $\xi_n(\oo)$, $n=1,2,\dots$, and the
sequences $\zeta_n(\oo)=\tilde\xi^{(\e)}_n(\oo)-\eta_n(\oo)$ and
$U_n(\oo)=\summ_{j=1}^n\zeta_j(\oo)$, $n=1,2,\dots$, satisfy
relation~(2.6) with $B_n=D_n^2=\summ_{k=1}^n\sigma_k^2$, $\alpha=2$ and
the number $\e$ we have fixed. This relation will be proved with an
application of the Basic Lemma. If we can do this for arbitrary $\e>0$,
then Theorem~4 of Part~I, recalled at the beginning of Section~2, and
the almost sure functional (central) limit theorem for the sequence
$\eta_n(\oo)$, $n=1,2,\dots$, imply Theorem~1.
We shall omit the signs ``$\,\tilde{\vphantom{\e}}\,$'' and
``${}^{(\e)}$'' and write $\xi_n$ instead of $\tilde\xi_n^{(\e)}$. To
apply the Basic Lemma we have to define some quantities. We fix a
sufficiently small $\bar\e=\bar\e(\e)>0$ to be defined later and define
the numbers $N(n)$, $n=1,2,\dots$, by means of the sequence $B_n=D_n^2$
as in the formulation of the Basic Lemma. Then we define an ``$\bar\e$
regular refinement'' $N(n,k)$, $n=1,2,\dots$, $0\le k\le l_n$, of the
sequence $N(n)$. By this regularity property we mean that
$$
\aligned
\bar\e (B_{N(n)}-B_{N(n-1)})\le &B_{N(n,k)}-B_{N(n,k-1)}\le 3
\bar\e (B_{N(n)}-B_{N(n-1)})\\
&\qquad \text{for $n\ge n_0(\bar \e)$ and all } 1\le k\le l_n.
\endaligned \tag4.1
$$
The numbers $N(n,k)$ will be defined recursively in the
variable $k$ for fixed $n$ in the
following way. Put $N(n,0)=N(n-1)$, and if $N(n,k)$ is already defined
and $B_{N(n)}-B_{N(n,k)} >3\bar\e(B_{N(n)}-B_{N(n-1)})$, then
$$
N(n,k+1)=\min\{j\:B_j-B_{N(n,k)}\ge\bar\e (B_{N(n)}-B_{N(n-1)})\}.
$$
If $B_{N(n)}-B_{N(n,k)}\le3\bar\e(B_{N(n)}-B_{N(n-1)})$, then put
$N(n,k+1)=N(n)$. Let us remark that the Lindeberg condition (1.4)
implies that the sequence $B_n$, $n=1,2,\dots$, satisfies relation
(1.1), and $\limm_{n\to\infty}\supp_{N(n-1)\le k\le
N(n)} \dfrac{\sigma_k^2}{B_{N(n)}}=0$. Hence $\limm_{n\to\infty}2^{-n}
B_{N(n)}=1$, and $B_{N(n,k)}-B_{N(n,k-1)}\sim \bar\e
(B_{N(n)}-B_{N(n-1)})$, if $1\le k\le l_n-1$. It is not difficult
to see that the sequence $N(n,k)$ is an $\bar\e$ regular refinement of
the sequence $N(n)$.
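As an illustration only (not part of the proof), the recursive construction of the refinement described above can be carried out numerically. The concrete choices below are assumptions made for the sake of the example: $\sigma_k^2=1$, so that $B_n=n$, and $N(n)=\min\{j\: B_j\ge 2^n\}$, a choice consistent with the relation $\limm_{n\to\infty}2^{-n}B_{N(n)}=1$ noted above.

```python
def refine(B, N_prev, N_cur, eps_bar):
    """Return the block boundaries N(n,0), ..., N(n,l_n) refining (N(n-1), N(n)].

    B is the list of partial sums B_1, ..., B_m, accessed 1-indexed via B[j-1].
    """
    gap = B[N_cur - 1] - B[N_prev - 1]   # B_{N(n)} - B_{N(n-1)}
    blocks = [N_prev]                    # N(n,0) = N(n-1)
    k = N_prev
    while B[N_cur - 1] - B[k - 1] > 3 * eps_bar * gap:
        # N(n,k+1) = min{j : B_j - B_{N(n,k)} >= eps_bar * gap}
        j = k + 1
        while B[j - 1] - B[k - 1] < eps_bar * gap:
            j += 1
        blocks.append(j)
        k = j
    blocks.append(N_cur)                 # remaining mass is at most 3 * eps_bar * gap
    return blocks

# Hypothetical example: sigma_k^2 = 1, hence B_n = n, and N(n) = min{j : B_j >= 2^n}.
B = list(range(1, 2049))

def N(n):
    return next(j for j in range(1, len(B) + 1) if B[j - 1] >= 2 ** n)

eps_bar = 0.1
blocks = refine(B, N(9), N(10), eps_bar)
gap = B[N(10) - 1] - B[N(9) - 1]
increments = [B[b - 1] - B[a - 1] for a, b in zip(blocks, blocks[1:])]
# Regularity property (4.1): every block increment lies in [eps_bar*gap, 3*eps_bar*gap].
assert all(eps_bar * gap <= d <= 3 * eps_bar * gap for d in increments)
```

The final assertion checks the regularity property (4.1) for the constructed block; the last increment is handled by the stopping rule, which halts as soon as the remaining mass drops to at most $3\bar\e(B_{N(n)}-B_{N(n-1)})$.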
Let $F_{n,k}(x)=P(S_{n,k}(\oo)<x)$