0$, $\eta>0$, if the numbers $\eta=\eta(\beta,l)$ and $\delta=\delta(\beta,l)$ are appropriately chosen. We have already proved in Proposition~A that $\mu(A(\beta))<1-\delta$, where $\mu$ is the (weak) limit of the measures $\mu_n$. Moreover, this statement also holds for the closure $\bar A(\beta)$ of the set $A(\beta)$ with a possibly smaller parameter $\eta$. Since $\mu_n\Rightarrow \mu$ and the set $\bar A(\beta)$ is closed, $\limsup\limits_{n\to\infty}\mu_n(\bar A(\beta))\le \mu(\bar A(\beta))$. This implies that the relation $\mu_n(A(\beta))<1-\delta$ also holds for large $n$. Proposition~B is proved. \beginsection 6. The proof of the main results. {\it Proof of Theorem 1.}\/ By Lemma~4 $\log \left|S^{(k)}(n)\right|-\log\left|\bar S^{(k)}(n)\right|\Rightarrow 0$, where $\bar S^{(k)}(n)$ is defined in (2.12), and $\Rightarrow$ denotes stochastic convergence. Hence $S^{(k)}(n)$ can be replaced by $\bar S^{(k)}(n)$ in the proof of Theorem~1. We claim that $$ \aligned &\frac{U_1(n)}{\sqrt n}\Rightarrow 0,\quad \frac{T_1^2(n)}{\sqrt n}\Rightarrow 0\quad \text{and}\\ &\frac1{\sqrt n} \log\left|\cos\(nB_0(n)+T_0(n)-U_2(n)-\frac{\omega(n)}2\)\right| \Rightarrow0. \endaligned \tag6.1 $$ The third relation in (6.1) is needed only in the case when $0<\fb(\alpha^*)<\pi$. The first two relations in (6.1) are trivial, since the random variables $U_1(n)$ and $T_1^2(n)$ are stochastically bounded; they are even stochastically convergent. The third relation holds, since the random variables $T_0(n)-U_2(n)$~mod~$2\pi$ converge in distribution to the uniform distribution on $[0,2\pi)$. Indeed, by Proposition~B the random vectors $(T_0(n),U_2(n))$ converge in distribution to a random vector $(T,U)$, where $T$ and $U$ are independent and $T$ is uniformly distributed in $[0,2\pi)$. Hence the random variables $T_0(n)-U_2(n)$~mod~$2\pi$ converge in distribution to the uniform distribution on $[0,2\pi)$, as we claimed.
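For completeness we sketch the standard characteristic function argument behind the last step. Since $T$ is uniformly distributed in $[0,2\pi)$ and independent of $U$,
$$ Ee^{ik(T-U)}=Ee^{ikT}\,Ee^{-ikU}=0\quad\text{for all integers }k\ne0, $$
because $Ee^{ikT}=\dfrac1{2\pi}\int_0^{2\pi}e^{ikt}\,dt=0$ for $k\ne0$. Thus all non-trivial Fourier coefficients of the distribution of $T-U$~mod~$2\pi$ vanish, and this distribution is the uniform one on $[0,2\pi)$.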
This relation implies that the random variables $\log\left|\cos\(nB_0(n)+T_0(n)-U_2(n)-\dfrac{\omega(n)}2\)\right|$ converge in distribution to a random variable $\log|\cos V|$, where $V$ is uniformly distributed in~$[0,2\pi)$. This implies that the third relation in (6.1) also holds. The random variables $S_0(n)$ converge in distribution to a normal law with expectation zero and variance $\text{Var}\,\eta$, and a slight refinement of the previous argument also shows that the vectors $$ \(\log\left|\cos\(nB_0(n)+T_0(n)-U_2(n)-\dfrac{\omega(n)}2\)\right|,S_0(n)\) $$ converge in distribution to a random vector $(\log|\cos V|, Z)$, where $V$ and $Z$ are independent random variables, $V$ is uniformly distributed in $[0,2\pi)$, and $Z$ is normally distributed with expectation zero and variance $\text{Var}\,\eta$. Relation (2.13) follows from the above observations. Because of Lemma~4, the form of $\bar S^{(k)}(n)$ defined in (2.12), and the limit behaviour of the expression in the second relation of (6.1), the sign of $S^{(k)}(n)$ also satisfies the relations given in Theorem~1. \medskip\noindent {\it Proof of Lemma 5.}\/ The random variable $\eta=\eta(\alpha^*)$ is constant if and only if $$ \xi^2+2r(\alpha^*)\xi\cos\fb(\alpha^*)=\,\text{const.}\quad\text{ with probability 1.} $$ Since $\xi$ is a non-constant random variable, and its values satisfy an equation of second order, its distribution is concentrated in two points $x_1$ and $x_2$ which satisfy the identity $x_1^2+2r(\alpha^*)x_1\cos\fb(\alpha^*)=x_2^2+2r(\alpha^*)x_2 \cos\fb(\alpha^*)$, or equivalently $x_1+x_2+2r(\alpha^*)\cos\fb(\alpha^*)=0$. In case a.) when the relation $0<\fb(\alpha^*)<\pi$ holds, by Lemma~1 the identity $E\dfrac\xi{r^2(\alpha^*)+\xi^2+2r(\alpha^*)\xi\cos\fb(\alpha^*)}=0$ must hold. This is equivalent to the relation $px_1+qx_2=0$ with $p=P(\xi=x_1)$, $q=P(\xi=x_2)=1-p$, since $r^2+x_1^2+2rx_1\cos\fb=r^2+x_2^2+2rx_2\cos\fb$ in this case.
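The equivalence of the identity $x_1^2+2r(\alpha^*)x_1\cos\fb(\alpha^*)=x_2^2+2r(\alpha^*)x_2\cos\fb(\alpha^*)$ with the relation $x_1+x_2+2r(\alpha^*)\cos\fb(\alpha^*)=0$ follows by subtraction:
$$ x_1^2-x_2^2+2r(\alpha^*)\cos\fb(\alpha^*)(x_1-x_2) =(x_1-x_2)\bigl(x_1+x_2+2r(\alpha^*)\cos\fb(\alpha^*)\bigr)=0, $$
and since $x_1\ne x_2$, the second factor must vanish.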
Finally, the second equation of the fixed point equation (1.4) \ $r\left.\dfrac{\partial H}{\partial r}\right|_{r=r(\alpha^*)}=\alpha^*$ yields that $E\dfrac{r\xi\cos\fb+r^2}{r^2+\xi^2+2r\xi\cos\fb}=\alpha^*$. This is equivalent to $\dfrac {r^2}{r^2-x_1x_2}=\alpha^*$, since in this case $r^2+\xi^2+2r\xi\cos\fb=r^2-x_1x_2$, as the calculation $r^2+\xi^2+2r\xi\cos\fb=r^2+(px_1^2+qx_2^2) =r^2+(px_1+qx_2)(x_1+x_2)-x_1x_2=r^2-x_1x_2$ shows. (In the first equality the constant value of $r^2+\xi^2+2r\xi\cos\fb$ is identified with its expectation, and the term $2r\cos\fb\,(px_1+qx_2)$ vanishes because of $px_1+qx_2=0$.) We have proved that the distribution of the random variable $\xi$ must be concentrated in two different points, and the above equations make it possible to calculate $r(\alpha^*)$ and $\fb(\alpha^*)$ from $\alpha^*$. To decide whether we get a real solution for a pair $(F,\alpha^*)$ we have to check whether the condition $|\cos\fb(\alpha^*)|<1$ is satisfied. Some calculation shows that $\cos\fb(\alpha^*)=-\dfrac{x_1+x_2}{2r(\alpha^*)} =-\dfrac{(q-p)x_1}{2qr(\alpha^*)}$, $r^2(\alpha^*)=\dfrac pq\dfrac{\alpha^*}{1-\alpha^*}x_1^2$. The last two identities yield that $\cos^2\fb(\alpha^*)=\dfrac{(p-q)^2}{4pq}\dfrac{1-\alpha^*}{\alpha^*}$. This gives that the condition $|\cos\fb(\alpha^*)|<1$ is equivalent to $\alpha^*>1-4pq$. In case b.) when the relation $\fb(\alpha^*)=0$ holds, the random variable $\xi$ is concentrated in two points $x_1$, $x_2$, \ $x_1+x_2+2r(\alpha^*)=0$, and $E\dfrac\xi{(r+\xi)^2}\ge0$. The latter relation is equivalent to $E\xi\ge 0$ in the present case. Since $2r(\alpha^*)=-(x_1+x_2)$, the second part of the fixed point equation (1.4) yields that $\alpha^*=E\dfrac r{r+\xi}=-\dfrac{(p-q)(x_1+x_2)}{x_1-x_2}$. The conditions $px_1+qx_2\ge0$, $x_1+x_2<0$ are satisfied. The last condition appears because it is equivalent to $r(\alpha^*)>0$. Some calculation shows that under such conditions the relation $0<\alpha^*<1$ also holds. Case c.) in Lemma~5 when $\fb(\alpha^*)=\pi$ can be handled similarly to case~b.). Lemma~5 is proved.
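For completeness, the calculation behind the formula for $\cos^2\fb(\alpha^*)$ can be sketched as follows. Substituting $r^2(\alpha^*)=\dfrac pq\dfrac{\alpha^*}{1-\alpha^*}x_1^2$ into the square of the expression for $\cos\fb(\alpha^*)$ we get
$$ \cos^2\fb(\alpha^*)=\frac{(q-p)^2x_1^2}{4q^2r^2(\alpha^*)} =\frac{(q-p)^2x_1^2}{4q^2}\cdot\frac qp\cdot\frac{1-\alpha^*}{\alpha^*}\cdot \frac1{x_1^2} =\frac{(p-q)^2}{4pq}\cdot\frac{1-\alpha^*}{\alpha^*}, $$
and the inequality $\cos^2\fb(\alpha^*)<1$ is equivalent to $(p-q)^2(1-\alpha^*)<4pq\alpha^*$, i.e.\ to $\alpha^*>(p-q)^2=1-4pq$, since $(p-q)^2+4pq=(p+q)^2=1$.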
\medskip\noindent {\it Proof of Theorem 2.}\/ Because of Lemma~4 the random variable $S^{(k)}(n)$ can be replaced by $\bar S^{(k)}(n)$ defined in the first line of formula (2.12) in the proof of the limit theorem. Moreover, under the conditions of Theorem~2 $\sqrt{n}\, S_0(n)=0$, i.e.\ this term is missing from formula (2.12). Proposition~B implies that the random vectors $$ \(-U_1(n),\;nB_0(n)+T_1(n)-U_2(n)-\dfrac{\omega(n)}2\;\;\text{mod }2\pi\) \tag6.2 $$ converge in distribution to a random vector $(U,Z)$, where $Z=Z_1-U_2+\text{const.}\,\mod 2\pi$ with $U_2=\dfrac {-B_2(S^2-T^2)+2A_2ST}{2(A_2+B_2^2)}$, $U_1=-U=-\dfrac {A_2(S^2-T^2)+2B_2ST}{2(A_2+B_2^2)}$, $(S,T)$ is a Gaussian random vector with expectation zero and covariance matrix given in (2.14), and the random variable $Z_1$ is uniformly distributed in $[0,2\pi)$ and independent of the vector $(S,T)$. These relations imply that the random variable $Z$ is also uniformly distributed in $[0,2\pi)$, and it is independent of the vector $(S,T)$, hence also of the random variable $U$, since its conditional distribution under the condition $S=x$, $T=y$ is the uniform distribution on $[0,2\pi)$ for all $x$ and $y$. Lemma~4 together with the convergence in distribution of the random vectors defined in (6.2) to the random vector $(U,Z)$ implies Theorem~2.\medskip\noindent {\it Proof of Theorem $2'$.}\/ Here again the investigation of the random variable $S^{(k)}(n)$ can be replaced by that of $\bar S^{(k)}(n)$ defined in the second line of formula (2.12). We are interested in the asymptotic behaviour of the expression in the exponent of this formula. We describe the central limit theorem for the random vector $(L_n^{-1}S_0(n),T_1(n))$ with an appropriately defined normalization $L_n$. We have $\sqrt n\, S_0(n)=\sum\limits_{j=1}^n(\eta^{(0)}_j-E\eta^{(0)}_j)$ with $\eta^{(0)}_j=\log |r(\alpha(n))+\xi_j|$.
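The conditional distribution argument used above can be spelled out by a standard calculation, sketched here for completeness. Writing $u_2(x,y)$ for the value of $U_2$ under the condition $S=x$, $T=y$, we have for every Borel set $B\subset[0,2\pi)$
$$ P(Z\in B\mid S=x,\,T=y)=P\bigl(Z_1\in B+u_2(x,y)-\text{const.}\;\text{mod }2\pi\bigr) =\frac{\lambda(B)}{2\pi}, $$
where $\lambda$ denotes the Lebesgue measure, since $Z_1$ is independent of $(S,T)$ and the uniform distribution on $[0,2\pi)$ is invariant under shifts mod~$2\pi$. As this conditional distribution does not depend on $(x,y)$, the random variable $Z$ is uniform and independent of $(S,T)$.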
Under the conditions of Theorem~$2'$, $\lim\limits_{n\to\infty}\text{Var}\,\eta_j(n)=0$, but to determine the right norming $L_n$ we need a sharper estimate on this variance. To get it, observe that $r(\alpha(n))= r(\alpha^*)+(\alpha(n)-\alpha^*)r'(\alpha^*)+O\((\alpha(n)-\alpha^*)^2\)$, and since $x_1+x_2+2r(\alpha^*)=0$, $\eta_j\sim\log\left|\xi_j-\dfrac{x_1+x_2}2 +r'(\alpha^*)(\alpha(n)-\alpha^*)\right|$. Hence $\eta_j$ takes two values $y_1$ and $y_2$ with probabilities $p$ and $q$, and $|y_1-y_2|=\dfrac{4r'(\alpha^*)|\alpha(n)-\alpha^*|}{x_1-x_2}(1+o(1))$, where $x_1>x_2$. We get with the help of some calculation from the second relation in (1.4) and the relation $\fb(\alpha)=0$ in a small neighbourhood of $\alpha^*$ that $r'(\alpha^*)E\dfrac{\xi}{(r+\xi)^2}=1$. Because of this identity and the relation $x_1+x_2+2r(\alpha^*)=0$ we get that $r'(\alpha^*)=\dfrac {(x_1-x_2)^2}{4(px_1+qx_2)}$. Hence $\text{Var}\,S_0(n)=\text{Var}\,\eta_j=pq(y_1-y_2)^2\sim pq(\alpha(n)-\alpha^*)^2\dfrac{(x_1-x_2)^2}{(px_1+qx_2)^2}$. On the other hand, some calculation yields that $\text{Var}\,T_1(n)=\dfrac{(x_1+x_2)^2}{(x_1-x_2)^2}$. Since the random variables $\xi_j$ take two values, the random variables $S_0(n)$ and $T_1(n)$ are linear transforms of each other. Because of the above observations and the central limit theorem the random vectors $(L_n^{-1}S_0(n),T_1(n))$ converge in distribution to a vector $\(V,\dfrac{x_1+x_2}{x_1-x_2}V\)$ with the choice $L_n=\sqrt{pq}|\alpha(n)-\alpha^*|\dfrac{x_1-x_2} {px_1+qx_2}$, where $V$ is a standard normal random variable. This limit theorem together with the form of the second line in formula (2.12) implies Theorem~$2'$. \medskip\noindent {\it Acknowledgement:}\/ The author would like to thank the referee for some useful remarks which helped to simplify certain technical details. In particular, the formulation and proof of Lemma~A and some simplifications in the proofs of Lemmas~2 and~3 were based on his ideas.
\bye