\magnification=\magstep1
\hsize=16truecm
\input amstex
\parindent=20pt
\font\small=cmr8
\parskip=3pt plus 1.2pt
\TagsOnRight
\define\A{{\bold A}}
\define\BB{{\bold B}}
\define\DD{{\bold D}}
\define\K{{\bold K}}
\define\T{{\bold T}}
\define\U{{\bold U}}
\define\({\left(}
\define\){\right)}
\define\[{\left[}
\define\]{\right]}
\define\e{\varepsilon}
\define\oo{\omega}
\define\const{\text{\rm const.}\,}
\define\supp {\sup\limits}
\define\inff{\inf\limits}
\define\summ{\sum\limits}
\define\prodd{\prod\limits}
\define\limm{\lim\limits}
\define\limsupp{\limsup\limits}
\define\liminff{\liminf\limits}
\define\bigcapp{\bigcap\limits}
\define\bigcupp{\bigcup\limits}
\define\Sup{\text{\rm supp}\,}
\centerline{\bf Limit theorems on the direct product of a non-compact}
\centerline{\bf Lie group and a compact group}
\medskip
\centerline{\it P\'eter Major$^{(1)}$ and Gyula Pap$^{(2)}$}
\smallskip \centerline{\vbox{\hsize=12.5truecm\parindent=0pt
$^{(1)}$Mathematical Institute of the Hungarian Academy
of Sciences\hfill\break
Budapest, P.O.B. 127 H--1364, Hungary, e-mail:
major\@renyi.hu\hfill \vskip0truemm
$^{(2)}$Institute of Mathematics, University of Debrecen \hfill\break
P.O.B. 12 H--4010 Debrecen, Hungary, e-mail: papgy\@math.klte.hu}}
\medskip \noindent
{\narrower {\it Abstract:}\/ Let us consider a
triangular array of random vectors $(X_j^{(n)},Y_j^{(n)})$,
$n=1,2,\dots$, $1\le j\le k_n$, such that the first coordinates
$X_j^{(n)}$ take their values in a non-compact Lie group and the
second coordinates $Y_j^{(n)}$ in a compact group. Let the random vectors
$(X_j^{(n)},Y_j^{(n)})$ be independent for each fixed $n$, but we impose no
(independence type) condition on the relation between the
components of these vectors. We show under fairly general conditions
that if both random products $S_n=\prodd_{j=1}^{k_n} X_j^{(n)}$
and $T_n=\prodd_{j=1}^{k_n} Y_j^{(n)}$ have a limit distribution, then
also the random vectors $(S_n,T_n)$ converge in distribution as
$n\to\infty$. Moreover, the non-compact and compact coordinates of a
random vector with this limit distribution are independent.\par}
\medskip
\beginsection 1. Motivations for the investigation of the problem.
The problem investigated in this work appeared as a by-product of the
investigation in paper~[3]. In that paper the limit behaviour of the
appropriate normalizations of the $k$-order symmetric polynomials
$S_n^{(k)}=\summ_{1\le j_1<\dots<j_k\le n}X_{j_1}\cdots X_{j_k}$ was
studied.

Let us choose for all $\e>0$ two open sets $G\supset A\cup \partial A$
and $H\supset B\cup \partial B$ such that $\mu(G)<\mu(A)+\e$ and
$\nu(H)<\nu(B)+\e$. Then there
exist two continuous functions $f(\cdot)$ on the space $(X,\Cal A)$
and $g(\cdot)$ on the space $(Y,\Cal B)$ such that $0\le f(u)\le 1$
for all $u\in X$, $0\le g(v)\le 1$ for all $v\in Y$, $f(u)=1$ if
$u\in A$, $f(u)=0$ if $u\notin G$, and $g(v)=1$ if $v\in B$, $g(v)=0$
if $v\notin H$. Then
$$
\limsup_{n\to\infty} P_n(A\times B)\le \lim_{n\to\infty} \int
f(u)g(v) P_n(\,du,\,dv)\le \mu(G)\nu(H)\le (\mu(A)+\e)(\nu(B)+\e).
$$
Since the above relation holds for all $\e>0$ it implies the first
statement. The proof of the second statement is similar. In that
case we have to exploit that the measure of an open set can be
approximated arbitrarily well by the measure of a closed
set contained in this open set. \medskip
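For completeness we sketch this second argument. Given open sets $G$
and $H$ and a number $\e>0$, choose closed sets $C\subset G$ and
$F\subset H$ such that $\mu(C)>\mu(G)-\e$ and $\nu(F)>\nu(H)-\e$,
together with continuous functions $0\le f(u)\le 1$ and $0\le g(v)\le
1$ such that $f(u)=1$ if $u\in C$, $f(u)=0$ if $u\notin G$, $g(v)=1$
if $v\in F$, and $g(v)=0$ if $v\notin H$. Then
$$
\liminf_{n\to\infty} P_n(G\times H)\ge \lim_{n\to\infty} \int
f(u)g(v) P_n(\,du,\,dv)\ge \mu(C)\nu(F)\ge (\mu(G)-\e)(\nu(H)-\e),
$$
and letting $\e\to0$ we obtain the desired lower bound.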
Proposition 2.2, which together with Proposition~2.1 helps to prove
results of the type indicated in Section~1, is a generalization of
Lemma 1.5 in Raugi's paper~[6]. The proof heavily exploits Raugi's
ideas. Before the formulation of this result we recall some facts and
notations from the theory of group representations on compact groups.
The theory of group representations appears in a natural way if we want
to apply the characteristic function technique in the case of general
compact groups.
Let $\bold K$ be a compact group. A representation of the group $\bold
K$ is a continuous homomorphism of the group $\bold K$ to the group of
unitary transformations $\Cal U(H)$ of a Hilbert space $H$. We call
a representation $D\: \bold K\to \Cal U(H)$ irreducible if there is
no non-trivial closed subspace of the Hilbert space $H$ invariant with
respect to all unitary transformations $D(g)$, $g\in \bold K$. Two
representations $D_1$ and $D_2$ are called unitarily equivalent if
there is a unitary transformation $U$ of the Hilbert space $H$ such
that $U D_1(g) U^*=D_2(g)$ for all $g\in\bold K$. It follows from the
general theory of group representations that all irreducible
representations of a compact group $\bold K$ are finite dimensional,
that is, they map the group into the group of unitary matrices acting
on a finite dimensional (complex) Hilbert space. Let
$\text{Irr}\,(\bold K)$ denote a complete set of pairwise not
unitarily equivalent irreducible representations of the
group~$\bold K$.
Given a $D\in \text{Irr}\,(\bold K)$ of dimension $d=d(D)$, let
$D(i,j)(g)$, $g\in \bold K$, $1\le i,j\le d(D)$, denote the elements
of the matrix we get if the transformations $D(g)$ are
written in the form of a matrix with a fixed orthonormal basis of
the $d$-dimensional space. By one of the most important results of
the theory of group representations, the Peter--Weyl theorem, the set
of functions
$\sqrt{d(D)}\, D(i,j)(\cdot)$, $1\le i,j\le d(D)$, $D\in
\text{Irr}\,(\bold K)$, is a complete orthonormal basis in the space
$L_2(\bold K,\Cal K,\mu)$, where $\Cal K$ denotes the Borel
$\sigma$-algebra of the group $\bold K$ and $\mu$ is the Haar measure
in this space. Beside this, the finite linear combinations of the
functions $D(i,j)(\cdot)$ constitute an everywhere dense set in the
space of continuous functions on the compact group $\bold K$ with
respect to the supremum norm.
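As a minimal numerical illustration of these orthogonality relations (a sketch, not part of the theory above: the finite cyclic group $\bold Z_m$ stands in for $\bold K$, all of its irreducible representations are the one dimensional characters $\chi_k(g)=e^{2\pi ikg/m}$, so $d(D)=1$ and the Haar measure puts weight $1/m$ on each point):

```python
import cmath

m = 5  # the cyclic group Z_m stands in for the compact group K (illustrative choice)

def chi(k, g):
    """The character chi_k(g) = exp(2*pi*i*k*g/m), an irreducible representation of Z_m."""
    return cmath.exp(2j * cmath.pi * k * g / m)

def inner(k, l):
    """L2 inner product of chi_k and chi_l with respect to the Haar (uniform) measure."""
    return sum(chi(k, g) * chi(l, g).conjugate() for g in range(m)) / m

# the matrix elements form an orthonormal system; here d(D) = 1 for every character,
# so the normalizing factor sqrt(d(D)) equals 1
for k in range(m):
    for l in range(m):
        expected = 1.0 if k == l else 0.0
        assert abs(inner(k, l) - expected) < 1e-12
```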
If $X$ is a random variable taking values in a compact group $\bold K$
then let us define its Fourier transform $\Cal F_X=\Cal F_X(D)$,
$D\in\text{Irr}\,\bold K$, as
$$
\Cal F_X(D)=E\,D(X),\quad\text{that is}\quad \langle \Cal
F_X(D)u,v\rangle= E\langle D(X)u,v\rangle,\quad
D\in\text{Irr}\,\bold K
$$
if $u,v\in H(D)$, where $H(D)$ denotes the (finite dimensional)
Hilbert space where the group representation $D$ is acting. The
Fourier transform of a compact group $\bold K$ valued random variable
is the natural analog of the characteristic function of real valued
random variables. In particular, the Fourier transform of the product
of independent random variables on the group $\bold K$ equals the
product (taken in the same order) of the Fourier transforms of these
random variables.
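This multiplicativity can again be checked numerically in the simplest case (a sketch under the same toy setting: the cyclic group $\bold Z_6$, where the group operation is addition mod~6, the irreducible representations are one dimensional characters, and the two distributions are hypothetical illustration data):

```python
import cmath

m = 6  # the cyclic group Z_6; group multiplication is addition mod 6 (illustrative choice)

def fourier(p, k):
    """Fourier transform F_X(chi_k) = E chi_k(X) of a distribution p on Z_m."""
    return sum(p[g] * cmath.exp(2j * cmath.pi * k * g / m) for g in range(m))

def convolve(p, q):
    """Distribution of the product XY of independent X ~ p and Y ~ q (here: X + Y mod m)."""
    r = [0.0] * m
    for g in range(m):
        for h in range(m):
            r[(g + h) % m] += p[g] * q[h]
    return r

# two arbitrarily chosen (hypothetical) distributions on Z_6
p = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]
q = [0.1, 0.4, 0.1, 0.2, 0.1, 0.1]

# multiplicativity: F_{XY}(chi_k) = F_X(chi_k) * F_Y(chi_k) for every character chi_k
for k in range(m):
    assert abs(fourier(convolve(p, q), k) - fourier(p, k) * fourier(q, k)) < 1e-12
```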
Let us also recall that given a random variable $X$ with probability
distribution $\mu$ on a separable metric space $M$, there exists
a smallest closed subset $F\subset M$ such that $\mu(F)=1$. We shall
call this set the support of the random variable $X$ and denote it
by $\Sup(X)$.
The above facts help us to prove Proposition 2.2 formulated below.
Before the proof we shall discuss the content of the conditions
imposed in this result. \medskip\noindent
{\bf Proposition 2.2.} {\it Let $\bold N$ be a locally compact and
$\bold K$ a compact group. Let $\bold G=\bold N\times\bold K$ denote
their direct product. For each $n=1,2,\dots$ let
$(X_j^{(n)},Y_j^{(n)})$, $j=1,2,\dots,k_n$, $k_n\to\infty$ if
$n\to\infty$, be a sequence of independent random variables on
$\bold G$. Let us define the random products
$$
U_n=\prod_{j=1}^{k_n}X_j^{(n)},\quad V_n=\prod_{j=1}^{k_n}Y_j^{(n)},
\qquad n=1,2,\dots,
$$
Let us assume that the random variables
$X_j^{(n)}$ satisfy the following condition~(i):
\item{(i)} The relation
$$
\lim_{n\to\infty} E\left|f\(\prod_{j=1}^{k_n}X_j^{(n)}\)
-f\(\prod_{j=1}^{k_n-p}X_j^{(n)}\)\right|=0 \tag2.2
$$
holds for all $p=1,2,\dots$ and continuous and bounded functions
$f(\cdot)$ on the locally compact group $\bold N$.
Let $\bold K'$ be a closed subgroup of $\bold K$ such that
$\Sup(Y_j^{(n)})$ lies in one of its two--sided cosets, $a\bold
K'=\bold K'a$ with some $a\in \bold K$ for all $n=1,2,\dots$ and
$j=1,2,\dots,k_n$, and fix some $y\in a\bold K'$. Suppose that the random
variables $Y_j^{(n)}$ and their Fourier transforms satisfy the following
condition~(ii):
\item{(ii)} If \ $V$ is a random variable in $\bold K$ with uniform
distribution on the subgroup $\bold K'$, then
$$
\lim_{p\to\infty}\sup_{k_n\geq p} \left\|\Cal
F_{vy^{-p}\prodd_{j=k_n-p+1}^{k_n}Y_j^{(n)}}(D) -\Cal F_V(D)\right\|=0
\tag2.3
$$
where $y\in a\bold K'$ is the element fixed above, for all irreducible
representations $D\in\text{\rm Irr}\,(\bold K)$ and all $v\in\bold K'$.
Then the sequences $U_n$, $n=1,2,\dots$ and $y^{-k_n}V_n$,
$n=1,2,\dots$, are asymptotically independent, i.e.\ for all
continuous and bounded functions $f$ on the group $\bold N$ and
continuous and bounded functions $g$ on the group $\bold K$
$$
\lim_{n\to\infty} \( E[f(U_n)\,g(y^{-k_n}V_n)]
-E[f(U_n)]\, E[g(y^{-k_n}V_n)]\)=0 \tag2.4
$$
where $y\in a\bold K'$ is the element fixed above.}\medskip
If the conditions of Propositions 2.1 and 2.2 are satisfied then to
prove a limit theorem for the random vectors $(U_n,y^{-k_n}V_n)$,
$n=1,2,\dots$, it is enough to prove a limit theorem for the random
variables $U_n$ and $y^{-k_n}V_n$ separately. Then the joint
distributions of these random variables also converge weakly, and
the coordinates of the limit distribution are independent.
The condition (i) in Proposition~2.2 expresses the non-compact
character of the group $\bold N$. Its heuristic content is that by
omitting finitely many terms from the end of the product $U_n$ we make
a very small modification of this product. We shall return to the
discussion of this property in the next Section
in the formulation of Theorem~3.2.
Condition (ii) is slightly more general than the condition we need in
the sequel. It also helps to consider the case when the random
variables $y^{-k_n}V_n$ converge in distribution to the Haar measure
of a proper subgroup of the group~$\bold K$ with some appropriate
``shift"~$y^{-k_n}$. But we shall be interested mainly in the case
when the products $V_n$ converge to the Haar measure of the whole group
$\bold K$, and the ``shift" factors $y^{-k_n}$ do not appear. In this
case the factor $y^{-k_n}$ does not appear in formula (2.3), i.e.\ $y$
has to be chosen as the unit element of the group $\bold K$ in this
formula. As we shall see in the next section condition~(ii) of
Proposition~2.2 holds in a very general case. Let us also remark that
condition (2.3) is equivalent to the following formally weaker
statement:
$$
\lim_{p\to\infty}\sup_{k_n\geq p} \left\|\Cal
F_{y^{-p}\prodd_{j=k_n-p+1}^{k_n}Y_j^{(n)}}(D) -\Cal F_V(D)\right\|=0
\tag$2.3'$
$$
with an element $y\in a\bold K'$ for all irreducible representations
$D\in\text{\rm Irr}\,(\bold K)$, i.e. $v\in\bold K'$ can be replaced
by the identity of the group $\bold K$. Indeed, if $v\in \bold K'$ then
$v V$ has the same distribution as $V$, $\Cal F_V(D)=\Cal F_{vV}(D)$,
$$
\Cal F_{vy^{-p}\prodd_{j=k_n-p+1}^{k_n}Y_j^{(n)}}(D)-\Cal F_V(D)=D(v)
\(\Cal F_{y^{-p}\prodd_{j=k_n-p+1}^{k_n}Y_j^{(n)}}(D) -\Cal F_V(D)\),
$$
$D(v)$ is a unitary matrix, hence relation $(2.3')$ implies relation
(2.3). We formulated our condition in the form (2.3) because this
formula can be better applied in the proof.
Before the proof of Proposition~2.2 we briefly explain its main ideas.
Because of the Peter--Weyl theorem we can reduce the statement to be
proved to relation~(2.5). Then we exploit
Conditions (i) and (ii) of Proposition~2.2. In an informal way
the content of Condition (i) is that a negligibly small error is
committed if finitely many terms are omitted from the end of the
products $U_n$ on the locally compact group $\bold N$. Condition (ii)
says that the behaviour of the random product $V_n$ on the compact
group $\bold K$ shows a different character. Here the product of
sufficiently many terms at the end of the product $V_n$ determines the
distribution of the random variable $V_n$ with a very good
accuracy. To get a good approximation of this distribution we have to
take sufficiently many terms, but their number does not depend on the
parameter $n$. Condition~(ii) expresses this property in a rather
hidden way. It says in the language of Fourier transforms that the
product of finitely many terms at the end of the product $V_n$ is close
to the Haar measure of a subgroup of the group $\bold K$ or to one of
its shifts. Then the multiplication from the left by the independent
product of the remaining terms, which is needed to get the product
$V_n$, does not deteriorate this property.
The formal proof exploits these observations. First we show with the
help of Property (i) that by omitting finitely many terms from the end
of $U_n$ a negligible error is committed, and the proof of
Proposition~2.2 can be reduced to a good bound on the expression
$\gamma(n,p)$ introduced in formula (2.8). To exploit the available
independence we make a conditioning of $\gamma(n,p)$ with respect to
the condition $U_{n,p}=u$, $V_{n,p}=v$ where $U_{n,p}$ and $V_{n,p}$
are defined in formula (2.6). The conditional expectation we have
to handle can be bounded well with the help of Condition~(ii). In the
exact proof we need uniform bounds on the conditional expectation we
have to handle with respect to the conditions. They can be proved with
the help of usual compactness arguments.
\medskip\noindent
{\it Proof of Proposition 2.2.}\/ Because of the Peter--Weyl theorem it
is enough to prove instead of formula (2.4) that
$$
\lim_{n\to\infty}(E[f(U_n)D(y^{-k_n}V_n)]
-E[f(U_n)]E[D(y^{-k_n}V_n)])=0 \tag2.5
$$
for all bounded and continuous functions $f$ on $\bold N$ and
irreducible representations $D\in\text{Irr}\,(\bold K)$. (Here
$D(y^{-k_n}V_n)$ is understood as a random matrix, and formula (2.5)
means that each entry of this matrix satisfies the corresponding
relation.)
Let us define for all $p=1,2,\dots$ and $n$ such that $k_n\ge p$ the
random products
$$
U_{n,p}=\prodd_{j=1}^{k_n-p} X^{(n)}_j\quad\text{and}\quad
V_{n,p}=\prodd_{j=1}^{k_n-p} Y^{(n)}_j. \tag2.6
$$
Then we have
$$
\|E[f(U_n)D(y^{-k_n}V_n)]-E[f(U_n)]
E[D(y^{-k_n}V_n)]\|\leq\alpha(n,p)+\beta(n,p)+\gamma(n,p), \tag2.7
$$
where
$$
\align
\alpha(n,p)&=\|E[(f(U_n)-f(U_{n,p}))D(y^{-k_n}V_n)]\|
\leq E|f(U_n)-f(U_{n,p})|,\\
\beta(n,p)&=\|E[f(U_n)-f(U_{n,p})]E[D(y^{-k_n}V_n)]\|
\leq E|f(U_n)-f(U_{n,p})|, \\
\intertext{and}
\gamma(n,p)&=\|E[f(U_{n,p})D(y^{-k_n}V_n)]
-E[f(U_{n,p})]E[D(y^{-k_n}V_n)]\|, \tag2.8
\endalign
$$
since $\|D(x)\|=1$ for all $x\in\bold K$.
Condition (i) implies that
$$
\lim_{n\to\infty}\alpha(n,p)=0,\qquad\lim_{n\to\infty}\beta(n,p)=0
\tag2.9
$$
for all $p=1,2,\dots$. On the other hand,
$$
\align
\gamma(n,p)&=\|E[f(U_{n,p})\{D(y^{-k_n}V_n)-ED(y^{-k_n}V_n)\}]\|\\
&=\|E(E\left[f(U_{n,p})\{D(y^{-k_n}V_n)-E D(y^{-k_n}V_n)\}
\mid U_{n,p},V_{n,p}\right])\|\\
&=\left\| \int H_{n,p}(u,v) \mu_{n,p}(\,du,\,dv)\right\|\\
&\leq\|f\|_\infty\, \int \left\|E\left[
D\left(y^{-k_n}v\prod_{j=k_n-p+1}^{k_n}Y_j^{(n)}\right)
-E D(y^{-k_n}V_n)\right]\right\| \nu_{n,p}(\,dv),
\endalign
$$
where $\mu_{n,p}(\cdot,\cdot)$ denotes the distribution of the vector
$(U_{n,p},V_{n,p})$, $\nu_{n,p}(\cdot)$ the distribution of the random
variable $V_{n,p}$, and
$$
\align
H_{n,p}(u,v)&=E\left[f(U_{n,p})\{D(y^{-k_n}V_n)-E D(y^{-k_n}V_n)\}
\mid U_{n,p}=u,V_{n,p}=v\right]\\
&=f(u)\,E\left\{D\left(y^{-k_n}v\prod_{j=k_n-p+1}^{k_n} Y_j^{(n)}
\right) -E D(y^{-k_n}V_n)\right\}
\endalign
$$
because of our independence properties and the identity $V_n=
V_{n,p}\prodd_{j=k_n-p+1}^{k_n} Y_j^{(n)}$.
The relations $\Sup(Y_j^{(n)})\subset a\bold K'=\bold K'a$ for all $1\le
j\le k_n$ and $y\in a\bold K'=\bold K'a$ imply that $\Sup(V_{n,p})
=\Sup\left(\prodd_{j=1}^{k_n-p}Y_j^{(n)}\right)\subset a^{k_n-p}\bold
K'$ and $y^{-k_n}\in a^{-k_n}\bold K'$. Hence if $v\in \Sup
(V_{n,p})$ then $y^{-k_n}v\in y^{-p}\bold K'=\bold K'y^{-p}$, and
$$
\align
\gamma(n,p) &\leq\|f\|_\infty\sup_{v\in\bold K'}\left\|E
D\left(vy^{-p}\prod_{j=k_n-p+1}^{k_n}Y_j^{(n)}\right)
-E D(y^{-k_n}V_n)\right\|\\
&=\|f\|_\infty\sup_{v\in\bold K'}\left\|\Cal
F_{vy^{-p}V_{n,p}^{-1}V_n}(D) -\Cal F_{y^{-k_n}V_n}(D)\right\|.
\endalign
$$
Let us take a random variable $V$ with uniform distribution on the
subgroup $\bold K'$. The last relation implies that
$$
\gamma(n,p)\leq\|f\|_\infty \sup_{v\in \bold K'}
\left\|\Cal F_{vy^{-p}V_{n,p}^{-1}V_n}(D)-\Cal F_{V}(D)\right\|
+\|f\|_\infty \|\Cal F_{V}(D)-\Cal F_{y^{-k_n}V_n}(D)\|. \tag2.10
$$
For all $p=1,2,\dots$ and $v\in \bold K'$ put
$$
g_p(v)=\sup_{n\: k_n\geq p}\left\|\Cal F_{vy^{-p}V_{n,p}^{-1}V_n}(D)
-\Cal F_{V}(D)\right\|.
$$
We claim that $g_p(\cdot)$ is a continuous function on the space $\bold
K'$. Indeed, if $v,v'\in \bold K'$ then
$$
\align
\left\|\Cal F_{vy^{-p}V_{n,p}^{-1}V_n}(D)-\Cal F_V(D)\right\|
&\leq\left\|\Cal F_{vy^{-p}V_{n,p}^{-1}V_n}(D)
-\Cal F_{v'y^{-p}V_{n,p}^{-1}V_n}(D)\right\|\\
&\qquad +\left\|\Cal F_{v'y^{-p}V_{n,p}^{-1}V_n}(D)-\Cal
F_V(D)\right\|\\
&\le \sup_{x\in\bold K}\|D(vx)-D(v'x)\|+g_p(v')
\endalign
$$
for all $p\le k_n$, hence
$$
g_p(v)\leq g_p(v')+\sup_{x\in\bold K}\|D(vx)-D(v'x)\|.
$$
Then because of the symmetric role of $v$ and $v'$
$$
|g_p(v)-g_p(v')|\leq\sup_{x\in\bold K}\|D(vx)-D(v'x)\|,
$$
and the function $g_p(\cdot)$ is continuous on the group $\bold K'$
because of the uniform continuity of the group representations $D\in
\text{Irr}\,\bold K$.
Because of property (ii) $\limm_{p\to\infty}g_p(v)=0$ for all
$v\in \bold K'$. Hence
$$
\bigcup_{p=1}^\infty\{v\in \bold K'\: g_p(v)<\e\}=\bold K'
$$
for all $\e>0$, and the compactness of the group $\bold K'$ implies
that there exists an index $p(\e)$ such that
$$
\bigcup_{p=1}^{p(\e)}\{v\in \bold K'\:g_p(v)<\e\}=\bold K',
$$
that is
$$
\lim_{p\to\infty}\sup_{n\: k_n\geq p}\sup_{v\in \bold K'}
\left\|\Cal F_{vy^{-p}V_{n,p}^{-1}V_n}(D)-\Cal F_{V}(D)\right\|=0.
$$
Let us also observe that by taking only $p=k_n$ instead of
$\supp_{k_n\ge p}$ in Condition~(ii) we get, with the choice $v=e$,
where $e$ is the unit element of the group, that
$\limm_{n\to\infty}\|\Cal F_V(D)-\Cal
F_{y^{-k_n}V_n}(D)\|=0$.
Hence the last relation together with formula (2.10) implies that
$$
\lim_{p\to\infty}\sup_{n\: k_n\geq p}\gamma(n,p)=0. \tag2.11
$$
Relations (2.7), (2.9) and (2.11) imply formula (2.5), hence
Proposition~2.2.
\beginsection 3. Limit theorems on compact groups.
The results about limit theorems on compact groups are fairly well
understood. In this Section we show that a result
of Stromberg's paper~[7], formulated there as Theorem 3.3.5 and
called the Main Theorem in that paper, has some interesting
consequences. We formulate Stromberg's result in a slightly different
form.
\medskip\noindent
{\bf Proposition 3.1.} (Stromberg) {\it Let $Y_1,Y_2,\dots$ be a
sequence of independent, identically distributed random variables
on a compact group $\bold K$. Let us assume that the support of the
distribution of the random variable $Y_1$, $\Sup(Y_1)$, is not
contained in any proper closed subgroup of the group $\bold K$. Then
the random products $V_n=\prodd_{j=1}^nY_j$ converge in distribution
as $n\to\infty$ if and only if the support $\Sup(Y_1)$ of $Y_1$ is
not contained in any coset of any proper closed normal subgroup of
$\bold K$. If the limit distribution exists, then it is the Haar
measure $\mu_{\bold K}$ of the group $\bold K$.
A necessary and sufficient condition of the convergence in
distribution of the random products $V_n$ to the Haar measure
$\mu_{\bold K}$ of the group $\bold K$ can be expressed with the help
of the Fourier transform of the random variable $Y_1$ in the following
way: This convergence holds if and only if for all irreducible group
representations $D\in \text{\rm Irr}\,\bold K$ such that $D\neq D_0$,
where $D_0$ denotes the identity group representation (i.e.\
$D_0(g)=1$ for all $g\in \bold K$), the absolute values of all
eigenvalues of the Fourier transform $\Cal F_{Y_1}(D)=E D(Y_1)$ are
strictly less than~1.}\medskip
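The eigenvalue criterion of the second paragraph can be illustrated on a toy example (a sketch, not taken from the paper: the finite cyclic group $\bold Z_4$ replaces $\bold K$, and the distribution supported on $\{0,1\}$ is hypothetical illustration data; it lies in no proper closed subgroup and in no coset of the proper normal subgroup $\{0,2\}$):

```python
import cmath

m = 4  # the cyclic group Z_4 replaces the compact group K (illustrative choice)
p = [0.5, 0.5, 0.0, 0.0]  # hypothetical distribution supported on {0, 1}

def fourier(p, k):
    """Fourier transform E chi_k(Y) with respect to the character chi_k of Z_4."""
    return sum(p[g] * cmath.exp(2j * cmath.pi * k * g / m) for g in range(m))

# every non-identity character satisfies |E chi_k(Y)| < 1, as the criterion requires
assert all(abs(fourier(p, k)) < 1.0 for k in range(1, m))

def convolve(p, q):
    """Distribution of the sum mod m of independent variables with distributions p and q."""
    r = [0.0] * m
    for g in range(m):
        for h in range(m):
            r[(g + h) % m] += p[g] * q[h]
    return r

# the convolution powers (the distributions of V_n) approach the Haar (uniform) measure
v = p[:]
for _ in range(200):
    v = convolve(v, p)
assert all(abs(x - 1.0 / m) < 1e-9 for x in v)
```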
Stromberg formulated his result in the language of probability
measures instead of random variables. Beside this, he formulated a
slightly more general result, because he also discussed the case when
the smallest closed subgroup $\bold K_0$ containing the support of
the random variable $Y_1$ may be a proper subgroup of the group
$\bold K$. But it is not hard to reduce this general case to the
case described in Proposition~3.1, and actually this is done in
Stromberg's paper. Stromberg did not formulate explicitly the
statement of the second paragraph in Proposition~3.1, but he proved
it. Actually the core of the proof of the sufficiency part of the
convergence in distribution in Proposition~3.1 consists of the
verification of this statement. We formulated this statement
explicitly, because it plays an important role in our subsequent
discussion.
Let us remark that a sequence of random variables $V_n$ on a compact
group $\bold K$ converges in distribution to the Haar measure
$\mu_{\bold K}$ of this group if and only if for all irreducible
group representations $D\in\text{Irr}\,\bold K$, $D\neq D_0$, where
$D_0$ is the identity group representation, $\limm_{n\to\infty}\Cal
F_{V_n}(D)=0$. By Proposition~3.1, if these random variables are of
the form $V_n=\prodd_{j=1}^n Y_j$, where $Y_j$, $j=1,2,\dots$, are
independent, identically distributed random variables, then this
relation can hold only if the Fourier transform of $Y_1$ satisfies
the property formulated in Proposition~3.1. This fact has deep
consequences. Such consequences will be formulated in the following
Corollary of Proposition~3.1. \medskip\noindent
{\bf Corollary of Proposition 3.1.} {\it Let $Y_1,Y_2,\dots$, be a
sequence of independent, identically distributed random variables on a
compact group $\bold K$ such that the support $\Sup(Y_1)$ of the
random variable $Y_1$ is not contained in any proper closed subgroup
or any coset of any proper closed normal subgroup of $\bold K$. Then
the random products $V_n=\prodd_{j=1}^n Y_j$ converge in distribution
to the Haar measure $\mu_{\bold K}$ of the group $\bold K$, and the
random variables $Y_j$ also satisfy property (ii) formulated in
Proposition~2.2 with $k_n=n$, $Y_j^{(n)}=Y_j$, $j=1,\dots,n$, $\bold
K'=\bold K$ and $y=e$, where $e$ is the unit element of the group
$\bold K$. Also the following generalization of the above statement
holds.
For all $n=1,2,\dots$, let $Y_j^{(n)}$, $j=1,\dots,n$, be a sequence
of independent, identically distributed random variables on a compact
group $\bold K$ such that the distributions $\text{\rm
dist}\,Y_1^{(n)}$ of the random variables $Y^{(n)}_1$ converge weakly
to the
distribution of a random variable $Y$ whose support $\Sup(Y)$
is not contained in any proper closed subgroup or any coset of any
proper closed normal subgroup of $\bold K$. Then these random variables
$Y^{(n)}_j$ also satisfy property (ii) of Proposition 2.2 with $k_n=n$,
$\bold K'=\bold K$ and $y=e$, and the random products
$V_n=\prodd_{j=1}^nY^{(n)}_j$ converge weakly to the Haar measure
$\mu_{\bold K}$ of the group $\bold K$ as $n\to\infty$. These
statements also hold if we do not assume that the independent random
variables $Y_j^{(n)}$ are identically distributed; we only assume the
following: if $\rho$ is a metric on the space of probability measures
on the group $\bold K$ which metrizes the weak convergence of
probability distributions on $\bold K$ (such a metric exists if
$\bold K$ is a separable metric space),
then $\limm_{n\to\infty}\supp_{1\le j\le n}\rho\,(\text{\rm
dist}\,Y_j^{(n)},\text{\rm dist}\,Y)=0$.} \medskip
This corollary states that if the products of independent and
identically distributed random variables converge to the Haar
measure of the group, then they also satisfy property~(ii) of
Proposition~2.2. Moreover, the same relation also holds for their
small perturbations.
\medskip\noindent
{\it Proof of the Corollary of Proposition 3.1.}\/ It is enough to
prove formula (2.3) with $k_n=n$ in the case $D\in\text{Irr}\,\bold K$,
$D\neq D_0$, where $D_0$ is the identity group representation of
$\bold K$. Then in the case investigated in this corollary $\Cal
F_V(D)=0$, and $\Cal F_{vy^{-p}\prodd_{j=n-p+1}^nY_j^{(n)}}(D)=D(v)
\prodd_{j=n-p+1}^nE D\(Y_j^{(n)}\)$, and we have to show that
$$
\limm_{p\to\infty}\supp_{n\ge p}\left\|D(v)\prodd_{j=n-p+1}^n
ED\(Y_j^{(n)}\)\right\|=0.
$$
If $Y_j=Y_j^{(n)}$, $j=1,2,\dots,n$, are independent, identically
distributed random variables satisfying the conditions of the first
paragraph of this Corollary, then because of Proposition~3.1 there
exists an index $m=m(D)$ such that
$$
\left\|\prod_{j=l}^{l+m}ED\(Y_j^{(n)}\)\right\|\le\frac12 \quad
\text {if}\quad 1\le l\le l+m \le n. \tag3.1
$$
Since $\|D(v)\|=1$ and $\left\|ED\(Y_j^{(n)}\)\right\|\le1$,
relation (3.1) implies that
$$
\left\|D(v)\prodd_{j=n-p+1}^n ED\(Y_j^{(n)}\)\right\|\le
\(\dfrac12\)^{p/m-1}\le \e\quad \text{if}\quad p\ge p_0= p_0(D,\e)
$$
for all $n\ge p$ with an appropriate number $p_0$. Since this relation
holds for all $\e>0$ it implies condition (ii) of Proposition~2.2 if
the conditions in the first paragraph of this Corollary hold.
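The geometric decay used in this bound can be sketched numerically (a hypothetical $2\times2$ matrix $M$ plays the role of the Fourier transform $ED(Y_1)$; its eigenvalues $0.8$ and $0.3$ have modulus less than $1$, so some power of $M$ has operator norm at most $\frac12$, and submultiplicativity of the norm then forces geometric decay of $\|M^p\|$):

```python
import numpy as np

# a hypothetical matrix M playing the role of the Fourier transform E D(Y_1);
# its eigenvalues (0.8 and 0.3) have modulus strictly less than 1
M = np.array([[0.6, 0.3], [0.2, 0.5]])

def op_norm(A):
    """Operator (spectral) norm of a matrix."""
    return np.linalg.norm(A, 2)

# find an index m with ||M^m|| <= 1/2; it exists since the spectral radius is below 1
m, P = 1, M.copy()
while op_norm(P) > 0.5:
    m += 1
    P = P @ M

# submultiplicativity of the operator norm then yields geometric decay:
# ||M^p|| <= ||M^m||**(p // m) * ||M||**(p % m) <= (1/2)**(p // m), since ||M|| < 1
assert op_norm(M) < 1.0
for p in range(1, 60):
    assert op_norm(np.linalg.matrix_power(M, p)) <= 0.5 ** (p // m) + 1e-12
```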
If the conditions of the second paragraph hold, then a slight
modification of this argument yields the proof of condition (ii).
Indeed, let $Y_j$, $j=1,\dots,n$, be a sequence of independent,
identically distributed random variables with the same distribution as
the random variable~$Y$. Then there exists an index $n_0=n_0(m,D)$
such that
$$
\left\|\prod_{j=l}^{l+m}ED\(Y_j^{(n)}\)-
\prod_{j=l}^{l+m}ED\(Y_j\)\right\|\le\frac16\quad\text{for all}\quad
1\le l\le l+m\le n
$$
if $n\ge n_0$. This implies that a slight modification of formula
(3.1), where the upper bound $\frac12$ is replaced by $\frac23$, holds
in this case. This fact implies the validity of formula (2.3) also in
this case. Finally, formula (2.3) with the choice $v=e$ and $n=p$
instead of $\supp_{n\ge p}$ implies that $\limm_{n\to\infty}ED(V_n)=0$
if $D\neq D_0$. Hence the distributions of the random variables $V_n$
converge to the Haar measure $\mu_{\bold K}$. \medskip
The following Theorem 3.2 can be obtained as a consequence of the
already proved results. \medskip\noindent
{\bf Theorem 3.2.} {\it Let $\bold N$ be a locally compact and
$\bold K$ a compact group. Let $\bold G=\bold N\times\bold K$ denote
their direct product. Let us consider the triangular array
$(X_j^{(n)},Y_j^{(n)})$ of random variables on the group $\bold G$,
$n=1,2,\dots$, $1\le j\le k_n$, $k_n\to\infty$ if $n\to\infty$, which
are independent for each fixed $n$ for all indices $1\le j\le k_n$.
Let us
define the random products
$$
U_n=\prod_{j=1}^{k_n}X_j^{(n)},\quad
V_n=\prod_{j=1}^{k_n}Y_j^{(n)}, \qquad n=1,2,\dots.
$$
Let us assume that the Corollary of Proposition~3.1 can be applied
for the independent
$\bold K$ valued random variables $Y_j^{(n)}$, $j=1,2,\dots,k_n$, i.e.\
$\limm_{n\to\infty}\supp_{1\le j\le k_n}\rho\,(\text{\rm dist}\,
Y_j^{(n)},\text{\rm dist}\,Y)=0$ with a random variable $Y$ on the
group $\bold K$ whose support is not contained in any proper closed
subgroup or any coset of any proper closed normal subgroup of
$\bold K$.
Let us also assume that the distributions of the random variables $U_n$
converge weakly to a probability measure $\nu$ on the group $\bold N$,
and the random variables $X^{(n)}_j$, $1\le j\le k_n$, satisfy the
following smallness property: For all fixed positive integers $j$
$X^{(n)}_{k_n-j}\Rightarrow e$ as $n\to \infty$, where $\Rightarrow$
denotes stochastic convergence, and $e$ is the unit element of the
group $\bold N$.
Then the distributions of the random vectors $(U_n,V_n)$,
$n=1,2,\dots$, converge weakly to the direct product
$\nu\times\mu_{\bold K}$ on the group $\bold G$ as $n\to\infty$,
where $\mu_{\bold K}$ denotes the Haar measure on the group $\bold
K$.}\medskip\noindent
{\it Remark:}\/ In classical limit theorems for the products of
independent random variables on a Lie group we also assume that the
random variables whose normalized products converge in distribution
satisfy the uniform smallness condition
$\limm_{n\to\infty}\supp_{1\le j\le k_n}P(X_j^{(n)}\notin G)=0$ for
all open neighbourhoods $G$ of the unit element, and this is an
essentially stronger condition than the condition imposed in
Theorem~3.2.
\medskip\noindent
{\it Proof of Theorem 3.2.}\/ Let us first observe that to prove
Theorem~3.2 it is enough to show that under its conditions the random
variables $X_j^{(n)}$ satisfy condition (i) of Proposition~2.2. Indeed,
this relation together with the Corollary of Proposition~3.1 imply that
Proposition 2.2 can be applied, hence formula~(2.4) holds with $y=e$,
where $e$ is the unit element of the group $\bold K$. This
relation together with the weak convergence of the random products
$U_n$ and $V_n$ imply the validity of formula (2.1) with the choice
$X=\bold N$, $Y=\bold K$ if $P_n$ is the distribution of the random
vector $(U_n,V_n)$, and the pair of measures $(\mu,\nu)$ is
replaced by the pair of measures $(\nu,\mu_{\bold K})$. Hence
Proposition~2.1 yields the desired statement.
To prove condition (i) of Proposition 2.2 observe that because of the
smallness condition imposed on the random variables $X_j^{(n)}$
in Theorem 3.2 and the continuity of multiplication and inverse,
$U^{(n)}_p=\prodd_{j=k_n-p+1}^{k_n} X_j^{(n)} \Rightarrow e$, and
also its
inverse satisfies the relation $\(U^{(n)}_p\)^{-1}\Rightarrow e$ as
$n\to \infty$ for all fixed positive integers $p$. Beside this, as the
random variables $U_n$ are weakly convergent, they are also tight,
i.e.\ for all $\e>0$ there is a compact set $K=K(\e)\subset
\bold N$ such that $P(U_n\in K)>1-\e$ for all $n\ge n_0(\e,K)$. Since
$\prodd_{j=1}^{k_n-p} X_j^{(n)}=U_n \(\prodd_{j=k_n-p+1}^{k_n}
X_j^{(n)}\)^{-1}$, the above relations together with the uniform
continuity of the product on a compact subset of $\bold N\times \bold
N$ imply that
$$
\rho\(\prodd_{j=1}^{k_n} X_j^{(n)},
\prodd_{j=1}^{k_n-p}X_j^{(n)}\)\Rightarrow0 \tag3.2
$$
for all fixed $p$ as $n\to\infty$, where $\rho$ now denotes a metric
on the group $\bold N$. Relation (3.2) follows from the
above facts and the observation that for all $\e>0$ and compact sets
$K$ there is a number $\delta=\delta(\e,K)>0$ such that
$\rho(x,xy^{-1})\le\e$ if $x\in K$ and $\rho(y,e)<\delta$.
Since the products $\prodd_{j=1}^{k_n-p}X_j^{(n)}$ have a
limit distribution (actually we only need that these random variables
are tight), relation~(3.2) also implies that
$$
f\(\prodd_{j=1}^{k_n} X_j^{(n)}\)-f\(\prodd_{j=1}^{k_n-p}X_j^{(n)}\)
\Rightarrow0
$$
for all continuous and bounded functions $f$ on $\bold N$.
Hence relation~(2.2) holds. Theorem~3.2 is proved.
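The content of condition (i) can also be illustrated numerically on the simplest non-compact group, the real line with addition (a sketch with hypothetical parameters, not taken from the paper: $X_j^{(n)}$ are normal with variance $1/n$, $k_n=n$, and $f=\cos$ serves as the bounded continuous test function; the Monte Carlo estimate of $E|f(U_n)-f(U_{n,p})|$ shrinks as $n$ grows):

```python
import math
import random

random.seed(0)

# triangular array on the non-compact group (R, +): X_j^(n) ~ N(0, 1/n), k_n = n
# (hypothetical parameters chosen only for illustration)
def estimate(n, p, trials=2000):
    """Monte Carlo estimate of E|f(U_n) - f(U_{n,p})| with f = cos, the defect in (i)."""
    total = 0.0
    for _ in range(trials):
        xs = [random.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]
        total += abs(math.cos(sum(xs)) - math.cos(sum(xs[: n - p])))
    return total / trials

# since |cos a - cos b| <= |a - b|, the defect is at most E|X_{n-p+1} + ... + X_n|,
# which is of order sqrt(p / n): omitting a fixed number of factors becomes negligible
assert estimate(1000, 2) < estimate(10, 2)
assert estimate(1000, 2) < 0.1
```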
\beginsection 4. On limit theorems on Lie groups. Some open problems.
To apply Theorem 3.2 we still need some limit theorems for the
products of the (independent) elements of a row in a triangular array
of independent random variables taking values in a Lie group.
In certain applications limit theorems for normalized random products of
independent random variables are useful. Normalization of random
products $\prodd_{j=1}^nX_j$ of independent random variables $X_j$,
$j=1,2,\dots$, taking values in a Lie group means the application of a
sequence of homomorphisms $\tau_n$, $n=1,2,\dots$, of the Lie group,
that is, the definition of the expressions $\tau_n\(\prodd_{j=1}^n
X_j\)=\prodd_{j=1}^n\tau_n(X_j)$. If we study such normalized products,
it is natural to restrict our attention to some special Lie groups,
the so-called stratified groups, where the homomorphisms $\tau_n$ can
be defined in a natural way.
Fortunately, several non-trivial and useful central and other kind of
limit theorems are known both for the products of the elements in a
row of a triangular array and for the normalized products of
independent random variables which take their values in a Lie group.
Wehn [8] considered a triangular array of random variables
$X_j^{(n)}$, $1\le j\le k_n$, $k_n\to\infty$ if $n\to \infty$, taking
values in a Lie group such that the random variables
in the same row are not only independent, but also exchangeable in the
sense that $X_j^{(n)}X_k^{(n)}$ and $X_k^{(n)}X_j^{(n)}$ have the same
distribution, and gave sufficient conditions (including uniform
smallness) for the convergence of the distribution of the products
$U_n=\prodd_{j=1}^{k_n}X_j^{(n)}$ towards a Gaussian measure. (Pap
[5] proved that these conditions are also necessary under some extra
assumption.) Pap has also proved in [4] the Lindeberg theorem for a
triangular array of random variables in a stratified Lie group such
that the elements of the random variables in the same row are
exchangeable, and the limit distribution is a Gaussian measure which
is stable with respect to the natural dilations. Moreover, [4]
contains a Lindeberg theorem for the normalized products of
independent, exchangeable random variables in the Heisenberg group
such that the limit distribution is the standard Gaussian measure.
(It is not known whether this theorem can be generalized for all
stratified Lie groups.) There are some further results about
functional limit theorems on Lie groups. But since such problems do
not appear in our context we only refer to the paper Heyer and Pap
[2] and the reference list therein.
The behaviour of the simplest and best understood case, that of sums
of independent real-valued random variables, suggests some natural
conjectures and problems whose solution seems to be hard. We
formulate some of them.
Let $X_j$, $j=1,2,\dots$, be a sequence of independent (stratified)
Lie-group valued random variables, $\tau_n$ a sequence of
homomorphisms of the Lie group such that the random variables
$\tau_n(X_j)$ satisfy the uniform smallness condition, i.e.
$$
\limm_{n\to\infty}\supp_{1\le j\le n}P(\tau_n(X_j)\notin G)=0
$$
for all open neighbourhoods $G$ of the unit element of the group, and
the sequences $\tau_n\(\prodd_{j=1}^n X_j\)$ have a limit distribution
for $n\to\infty$. We are interested when we can state that the
normalized products of a small perturbation of these random variables
also satisfy a limit theorem. More explicitly, we formulate the
following problem. Let $\bar X_j$, $j=1,2,\dots$, be a new sequence of
independent random variables on the same Lie-group which is a small
perturbation of the original sequence $X_j$, i.e.\ the sequence $\bar
X_j X_j^{-1}$ converges stochastically to the unit element of the Lie
group as $j\to\infty$. When can we state that also the products
$\tau_n\(\prodd_{j=1}^n\bar X_j\)$ or their appropriate normalizations
$\tau_n\(\prodd_{j=1}^n\bar{\bar X_j}\)$ have a limit distribution
where $\bar{\bar X_j}=\bar X_j x_j$ with an appropriate element $x_j$
of the Lie group? What can be said if the random variables $X_j$ are
not only independent but also identically distributed?
If real-valued random variables are considered then the answer to the
above questions is fairly well understood. Let us consider the most
important special case when the partial sums of the independent random
variables $X_j$, $j=1,2,\dots$, divided by the square root of the
number of summands satisfy the central limit theorem. If the new
random variables $\bar X_j$ satisfy the relation
$\limm_{j\to\infty}E(\bar X_j-X_j)^2=0$ then the normalized partial
sums of the random variables $\bar X_j$ may not converge in
distribution, but the normalized partial sums of their appropriately
centered versions $\bar{\bar X_j}=\bar X_j-E\bar X_j$ satisfy the same
central limit theorem as the normalized partial sums of the original
random variables $X_j$. This is the reason why we asked in the case of general
Lie groups not only about the possible limit distribution of the
normalization of the products of the random variables $\bar X_j$ but
also about the limit distribution of the normalized products of their
appropriate shift $\bar{\bar X_j}=\bar X_jx_j$.
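Let us indicate the simple computation behind this centering in the
case when the random variables $X_j$ have zero expectation and finite
variance. Since $\bar{\bar X_j}-X_j=(\bar X_j-X_j)-E(\bar X_j-X_j)$,
and the random variables $\bar X_j-X_j$ are independent, we have
$$
E\(\frac1{\sqrt n}\summ_{j=1}^n\[\bar{\bar X_j}-X_j\]\)^2
=\frac1n\summ_{j=1}^n\text{\rm Var}\,(\bar X_j-X_j)
\le\frac1n\summ_{j=1}^nE(\bar X_j-X_j)^2\to0
\quad\text{as }n\to\infty,
$$
because the arithmetic means of a sequence tending to zero also tend
to zero. Hence the normalized partial sums of the random variables
$\bar{\bar X_j}$ have the same limit distribution as those of the
random variables $X_j$.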
We are interested in the question of how the above statement can be
generalized to the case of general Lie group valued random variables.
We can prove only some results in this direction if the Lie group is
special, namely a stratified Lie group, and the homomorphisms are also
special, namely the natural dilations of the group. A similar question
can be asked in the case of other limit theorems, when the limit
distribution may be a non-normal law. But we cannot handle this
problem in the general case. The main cause of the difficulty is that
the notions of expected value and variance of a Lie group valued
random variable cannot be defined in the case of general Lie groups.
Let us consider a triangular array of random variables $X_j^{(n)}$,
$1\le j\le k_n$, $k_n\to\infty$ as $n\to \infty$, taking values in a
Lie group such that the random variables in the same row are not only
independent, but also identically distributed, and the products
$U_n=\prodd_{j=1}^{k_n}X_j^{(n)}$ have a non-degenerate limit
distribution. We are interested in the question when this convergence
of the products in distribution implies that the random variables
$X_j^{(n)}$ satisfy the uniform smallness condition, i.e.\ when the
random variables $X_1^{(n)}$ converge stochastically to the unit
element of the group. It is known that this property holds if the Lie
group is the real line. The question arises for which Lie groups this
result can be generalized. It is natural to expect such a result for
Lie groups which have no compact subgroup beside the trivial subgroup
consisting of the unit element of the group. But the classical proof
of this result on the real line exploits special properties of the
trigonometric functions, hence the proof of such a result demands new
ideas.
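Let us recall the first step of this classical argument on the real
line. If $\varphi_n$ denotes the characteristic function of
$X_1^{(n)}$ and $\varphi$ that of the limit distribution, then the
convergence of the distribution of the products $U_n$ implies that
$$
|\varphi_n(t)|^{k_n}\to|\varphi(t)|\quad\text{for all }t,
$$
and since $\varphi$ is continuous, $\varphi(0)=1$ and $k_n\to\infty$,
this relation yields that $|\varphi_n(t)|\to1$ in a neighbourhood of
the origin. It is this exploitation of the functions $e^{itx}$ which
has no direct analogue on a general Lie group.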
A natural generalization of the problem investigated in this paper is
the study of the conditions under which the products
$\(A_n\prodd_{j=1}^{k_n}X_j^{(n)},B_n\prodd_{j=1}^{k_n}Y_j^{(n)}\)$
have a limit distribution, where $X_j^{(n)}$, $1\le j\le k_n$, is a
triangular array of random variables in a non-compact Lie group $\bold
N$, $Y_j^{(n)}$, $1\le j\le k_n$, is a triangular array of random
variables taking values in a compact group $\bold K$, the random
variables $X_j^{(n)}$, $1\le j\le k_n$, are independent for a fixed
$n$, the same relation holds for the random variables $Y_j^{(n)}$,
and finally $A_n\in\bold N$ and $B_n\in \bold K$ are appropriate
norming constants in the groups $\bold N$ and $\bold K$ respectively.
Formally this question can be reduced to the original question studied
in this paper, since we can get rid of the norming constants $A_n$ and
$B_n$ by replacing the random variables $X_{k_n}^{(n)}$ by $\bar
X_{k_n}^{(n)}=A_nX_{k_n}^{(n)}$, $X_k^{(n)}$ by $\bar X_k^{(n)}
=A_nX_k^{(n)}A_n^{-1}$ for $1\le k \le k_n-1$, $Y_{k_n}^{(n)}$ by $\bar
Y_{k_n}^{(n)}=B_nY_{k_n}^{(n)}$ and $Y_k^{(n)}$ by $\bar
Y_k^{(n)}=B_nY_k^{(n)}B_n^{-1}$ for $1\le k\le k_n-1$. However, this
observation in itself is not enough to handle the more general
problem, since the conditions are formulated for the original random
variables $X_k^{(n)}$ and $Y_k^{(n)}$ and not for their transforms
$\bar X_k^{(n)}$ and $\bar Y_k^{(n)}$. We know very little about this
problem.
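The identity behind this reduction is the telescoping relation
$$
A_n\prodd_{j=1}^{k_n}X_j^{(n)}
=\(\prodd_{j=1}^{k_n-1}A_nX_j^{(n)}A_n^{-1}\)\(A_nX_{k_n}^{(n)}\)
=\prodd_{j=1}^{k_n}\bar X_j^{(n)},
$$
obtained by inserting the factor $A_n^{-1}A_n$ between any two
consecutive terms of the product, together with the analogous relation
for the random variables $Y_j^{(n)}$ and the norming constant $B_n$.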
\medskip\noindent{\bf References:}\medskip
\item{[1]} Billingsley, P.:
{\it Convergence of Probability Measures.\/} John Wiley \& Sons Inc.,
New York--London--Sydney--Toronto (1968)
\item{[2]} Heyer, H. and Pap, G.:
Convergence of noncommutative triangular arrays of probability
measures on a Lie group. {\it J. Theor.\ Probab.\/} {\bf 10},
1003--1052 (1997)
\item{[3]} Major, P.:
The limit behaviour of elementary symmetric polynomials of i.i.d.\
random variables when their order tends to infinity.
{\it Annals of Probability\/} {\bf 27}, No.~4, 1980--2010 (1999)
\item{[4]} Pap, G.:
Central limit theorems on nilpotent Lie groups. {\it Probability and
Mathematical Statistics\/} {\bf 14}, 287--312 (1993)
\item{[5]} Pap, G.:
Lindeberg--Feller theorems on Lie groups. {\it Archiv der Math.\/}
{\bf 72}, 328--336 (1999)
\item{[6]} Raugi, A.:
Th\'eor\`eme de la limite centrale pour un produit semi-direct d'un
groupe de Lie r\'esoluble simplement connexe de type rigide par un
groupe compact. In: H. Heyer (ed.) Probability Measures on Groups.
Proceedings, Oberwolfach, 1978. (Lecture Notes in Math., vol.~706,
pp.~257--324) Springer, Berlin--Heidelberg--New York (1979)
\item{[7]} Stromberg, K.:
Probabilities on a compact group.
{\it Trans.\ Am.\ Math.\ Soc.\/} {\bf 94}, 295--309 (1960)
\item{[8]} Wehn, D.:
Probabilities on Lie groups.
{\it Proc.\ Natl.\ Acad.\ Sci.\ USA\/} {\bf 48}, 791--795 (1962)
\bye