# The norm of products of free random variables

###### Abstract

Let $X_i$ denote free identically-distributed random variables. This paper investigates how the norm of products $\Pi_n = X_n X_{n-1}\cdots X_1$ behaves as $n$ approaches infinity. In addition, for positive $X_i$ it studies the asymptotic behavior of the norm of $Y_n = X_n\circ X_{n-1}\circ\cdots\circ X_1$, where $\circ$ denotes the symmetric product of two positive operators: $X\circ Y := \sqrt{X}\,Y\sqrt{X}$.

It is proved that if the expectation of $X_i$ is $1$, then the norm of the symmetric product $Y_n$ is between $c_1\sqrt{n}$ and $c_2 n$ for certain constants $c_1$ and $c_2$. That is, the growth in the norm is at most linear.

For the norm of the usual product $\Pi_n$, it is proved that the limit of $n^{-1}\log\|\Pi_n\|$ exists and equals $\log\sqrt{E\left(X_i^{*}X_i\right)}$. In other words, the growth in the norm of the product is exponential and the rate equals the logarithm of the Hilbert-Schmidt norm of the operator $X_i$.

Finally, if $\pi$ is a cyclic representation of the algebra generated by $X_i$, and if $\xi$ is a cyclic vector, then $n^{-1}\log\|\pi(\Pi_n)\xi\| = \log\sqrt{E\left(X_i^{*}X_i\right)}$ for all $n$. In other words, the growth in the length of the cyclic vector is exponential and the rate coincides with the rate in the growth of the norm of the product.

These results are significantly different from analogous results for commuting random variables and generalize results for random matrices derived by Kesten and Furstenberg.

## 1 Introduction

Suppose $X_1, X_2, \dots$ are identically-distributed free random variables. These variables are infinite-dimensional linear operators but the reader may find it convenient to think of them as very large random matrices. The first question we will address in this paper is how the norm of $\Pi_n = X_n X_{n-1}\cdots X_1$ behaves. If $X_i$ are all positive, then it is natural to look also at the symmetric product operation $\circ$, defined as follows: $X\circ Y := \sqrt{X}\,Y\sqrt{X}$. The benefit is that unlike the usual operator product, this operation maps the set of positive variables to itself. For this operation we can ask how the norm of symmetric products $Y_n = X_n\circ X_{n-1}\circ\cdots\circ X_1$ behaves.^{1}

^{1}The operation $\circ$ is neither commutative, nor associative. By convention we multiply starting on the right, so, for example, $X_3\circ X_2\circ X_1 = X_3\circ(X_2\circ X_1)$. However, this convention is not important for the question that we ask. First, it is easy to check that $X\circ Y$ has the same spectral distribution and therefore the same norm as $Y\circ X$. Second, if $X$, $Y$, and $Z$ are free, then the spectral distribution of $X\circ(Y\circ Z)$ is the same as the spectral distribution of $(X\circ Y)\circ Z$, and therefore these two products have the same norm. In brief, if $X_1,\dots,X_n$ are free, then the norm of $Y_n$ does not depend on the order in which $X_1,\dots,X_n$ are multiplied by the operation $\circ$.
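In finite dimensions these claims are easy to sanity-check numerically. The following sketch (our illustration, not part of the paper; the matrix size and seed are arbitrary) verifies that $\sqrt{X}\,Y\sqrt{X}$, $\sqrt{Y}\,X\sqrt{Y}$, and $XY$ all have the same spectrum, and hence the same norm, for positive definite matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sqrtm_psd(a):
    # Square root of a symmetric positive definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(w)) @ v.T

def random_pd(n):
    # A random symmetric positive definite matrix (illustrative construction).
    g = rng.standard_normal((n, n))
    return g @ g.T + n * np.eye(n)

n = 6
X, Y = random_pd(n), random_pd(n)
sX, sY = sqrtm_psd(X), sqrtm_psd(Y)

e1 = np.sort(np.linalg.eigvalsh(sX @ Y @ sX))   # spectrum of sqrt(X) Y sqrt(X)
e2 = np.sort(np.linalg.eigvalsh(sY @ X @ sY))   # spectrum of sqrt(Y) X sqrt(Y)
e3 = np.sort(np.linalg.eigvals(X @ Y).real)     # spectrum of X Y

assert np.allclose(e1, e2) and np.allclose(e1, e3)
print("largest eigenvalue (common norm):", e1[-1])
```

The agreement reflects the similarity $\sqrt{X}\,Y\sqrt{X} = \sqrt{X}\,(YX)\,\sqrt{X}^{-1}$, so all three operators share eigenvalues.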

Products of random matrices and their asymptotic behavior were originally studied by Bellman (1954). One of the decisive steps was made by Furstenberg and Kesten (1960), who investigated a matrix-valued stationary stochastic process $X_1, X_2, \dots$ and proved that the limit of $n^{-1}E\log\|X_n\cdots X_1\|$ exists (but might equal $-\infty$) and that under certain assumptions $n^{-1}\log\|X_n\cdots X_1\|$ converges to this limit almost surely. Essentially, the only facts that are used in the proof of this result are the ergodic theorem, the norm inequality $\|AB\|\leq\|A\|\,\|B\|$, and the fact that the unit sphere is compact in finite-dimensional spaces. It is the lack of compactness of the unit sphere in the infinite-dimensional space that makes generalizations to infinite-dimensional operators non-trivial (see Ruelle (1982) for a generalization in the case of compact operators). More work on non-commutative products was done by Furstenberg (1963), Oseledec (1968), Kingman (1973), and others. The results are often called multiplicative ergodic theorems and they find many applications in mathematical physics. For example, see Ruelle (1984).

In this paper, we study products of free random variables. These variables are (non-compact) infinite-dimensional operators which can be thought of as a limiting case of large independent random matrices.

Suppose that $X_i$ are free, identically-distributed, self-adjoint, and positive. Suppose also that $E(X_i) = 1$. Then we show that the norm of $Y_n = X_n\circ\cdots\circ X_1$ grows no faster than a linear function of $n$. Precisely, we find that $\|Y_n\| \leq Cn$ for a constant $C$ and all sufficiently large $n$.

We are also able to show that if the distribution of $X_i$ is not concentrated at $1$, then $\|Y_n\|$ grows at least as fast as $\sigma\sqrt{n}$, where $\sigma^2$ denotes the variance of $X_i$.

For the usual products $\Pi_n = X_n\cdots X_1$ we can relax the assumption of self-adjointness. So, suppose that $X_i$ are free and identically-distributed but not necessarily self-adjoint. Also, we do not require that $E(X_i) = 1$. Then we show that

$$\lim_{n\to\infty}\frac{1}{n}\log\|\Pi_n\| = \log\sqrt{E\left(X_i^{*}X_i\right)}. \qquad (1)$$

Another way to describe the behavior of $\Pi_n$ is to look at how the norm of a fixed vector changes when we consecutively apply free operators to it. More precisely, suppose that the action of the algebra of variables $X_i$ on a Hilbert space $H$ is described by a cyclic representation $\pi$ and that the vector $\xi$ is cyclic with respect to the expectation $E$. By definition, this means that $E(A) = \langle\pi(A)\xi, \xi\rangle$ for every operator $A$ from a given algebra. Then we show that

$$\frac{1}{n}\log\|\pi(\Pi_n)\xi\| = \log\sqrt{E\left(X_i^{*}X_i\right)}. \qquad (2)$$

Note that we do not need to take the limit, since the equality holds for all $n$.

The reader may think of cyclic vectors as typical vectors. For example, if the representation is cyclic and irreducible then cyclic vectors are dense in $H$. In colloquial terms, (1) says that for large $n$ the product $\Pi_n$ cannot increase the norm of any given vector at an exponential rate larger than $\log\sqrt{E(X_i^{*}X_i)}$. And (2) says that for every cyclic vector this growth rate is achieved.

One more way to capture the intuition of this result is to write

$$\lim_{n\to\infty}\frac{1}{n}\log\|\Pi_n\| = \lim_{n\to\infty}\frac{1}{n}\log\sup_{\eta\neq 0}\frac{\|\pi(\Pi_n)\eta\|}{\|\eta\|}.$$

We have shown that this limit is equal to

$$\lim_{n\to\infty}\frac{1}{n}\log\frac{\|\pi(\Pi_n)\xi\|}{\|\xi\|},$$

where $\xi$ is a cyclic vector. Thus, for large $n$ the product $\Pi_n$ acts uniformly in all directions. Its maximal dilation as measured by $\|\Pi_n\|$ has the same exponential order of magnitude as the dilation in the direction of a typical vector $\xi$.

It is helpful to compare these results with the case of commutative random variables. Suppose for the moment that $x_1, \dots, x_n$ are independent commutative random variables with positive values. Then,

$$\|x_1 x_2\cdots x_n\|_{\infty} = \|x_1\|_{\infty}\cdots\|x_n\|_{\infty} = \|x\|_{\infty}^{n},$$

where the norm of a random variable is the essential supremum norm (i.e., $\|x\|_{\infty} = \operatorname{ess\,sup}|x|$). Indeed, for every $\varepsilon > 0$ the measure of the set where every $x_i$ exceeds $\|x\|_{\infty} - \varepsilon$ is positive. Therefore $\|x_1\cdots x_n\|_{\infty} \geq \left(\|x\|_{\infty} - \varepsilon\right)^{n}$. Note that $\|x\|_{\infty} \geq E(x) = 1$, and therefore the norm of free products grows more slowly than we would expect from the classical case.
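The classical exponential growth can be illustrated exactly with a two-point distribution (our toy example, not from the paper): a variable taking the values 0.5 and 1.5 with equal probability has expectation 1 and essential supremum 1.5, and the essential supremum of a product of $n$ independent copies is $1.5^n$:

```python
import itertools, math

values = (0.5, 1.5)   # two-point law: E[X] = 1, ||X||_inf = 1.5 (toy example)
n = 10

# Enumerate all 2**n equally likely outcomes of the product x_1 * ... * x_n.
products = [math.prod(w) for w in itertools.product(values, repeat=n)]

# The essential supremum of the product is ||X||_inf ** n: exponential in n.
assert math.isclose(max(products), 1.5 ** n)
print(max(products))  # 57.66...
```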

Another interesting comparison is that with results about products of random matrices. Let $X_1, X_2, \dots$ be i.i.d. random $k\times k$ matrices. Then under suitable conditions, $\lim_{n\to\infty} n^{-1}\log\|X_n\cdots X_1\|$ exists almost surely. Let us denote this limit as $\gamma$. Furstenberg (1963) developed a general formula for $\gamma$, and Cohen and Newman (1984) derived explicit results in the case when entries of $X_i$ have a joint Gaussian distribution. In particular, if all entries of $X_i$ are independent and have the distribution $N(0, s^2/k)$, then $\gamma = \log s + \frac{1}{2}\left[\psi(k/2) + \log 2 - \log k\right]$, where $\psi$ is the digamma function ($\psi(z) = \Gamma'(z)/\Gamma(z)$). If the size of the matrices grows ($k\to\infty$) then $\gamma\to\log s$. To compare this with our results, note that if $k\to\infty$ then the sequence of random matrices approximates a free random variable $X$ with the spectral distribution that is uniform inside the circle of radius $s$. For this free variable, $E(X^{*}X) = s^2$, and our theorem shows that $\lim n^{-1}\log\|\Pi_n\| = \log s$. This limit agrees with the result for random matrices. Thus, our result can be seen as a limiting form of results for random matrices.
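A rough Monte Carlo check of this agreement (our illustration; the matrix size, number of factors, and seed are all arbitrary choices) multiplies i.i.d. Gaussian matrices with $N(0, s^2/k)$ entries and estimates the exponential growth rate of the norm, which should be close to $\log s = 0$ for $s = 1$ and large $k$:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, s = 100, 20, 1.0   # matrix size, number of factors, scale (all arbitrary)

# Multiply i.i.d. k x k matrices with independent N(0, s^2/k) entries.
P = np.eye(k)
for _ in range(n):
    P = rng.normal(0.0, s / np.sqrt(k), size=(k, k)) @ P

# Estimated exponential growth rate of the norm; theory predicts ~ log s = 0.
rate = np.log(np.linalg.norm(P, 2)) / n
print(rate)
assert abs(rate - np.log(s)) < 0.5
```

The tolerance is deliberately loose: for finite $k$ and $n$ the rate fluctuates around its limit.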

The results regarding $Y_n$ are also interesting.
We can associate with $X_i$ and $Y_n$ probability measures
$\mu$ and $\mu_n$, which are called the spectral probability measures of $X_i$ and $Y_n$, respectively. Then the measure $\mu_n$ is
determined only by $n$ and the measure $\mu$, and is called the $n$-time
*free multiplicative convolution* of $\mu$ with itself:

$$\mu_n = \underbrace{\mu\boxtimes\mu\boxtimes\cdots\boxtimes\mu}_{n} = \mu^{\boxtimes n}.$$

The norm $\|Y_n\|$ is easy to interpret in terms of the distribution $\mu_n$. Indeed, it is the smallest number $L$ such that the support of $\mu_n$ is inside the interval $[0, L]$. Therefore, the growth in $\|Y_n\|$ measures the growth in the support of the spectral probability measure $\mu$ if the measure is convolved with itself using the operation of the free multiplicative convolution.

In the case of classical multiplicative convolutions of probability measures, the support grows exponentially, so that if $\mu$ is supported on $[0, L]$ then the $n$-fold classical multiplicative convolution of $\mu$ with itself is supported on $[0, L^n]$. What we have found in the case of free multiplicative convolutions is that if we fix $\mu$ with mean $1$, then the support of the $\mu^{\boxtimes n}$ grows no faster than a linear function of $n$, i.e., the support of $\mu^{\boxtimes n}$ is inside the interval $[0, Cn]$ with a constant $C$.

As was pointed out in the literature, a similar phenomenon occurs for sums of free random variables. The support of measures obtained by free additive convolutions grows much more slowly than in the case of classical additive convolutions. This effect was called superconvergence by Bercovici and Voiculescu (1995). Our finding about $Y_n$ can be considered as a superconvergence for free multiplicative convolutions.

The rest of the paper is organized as follows. Section 2 formulates the results. Section 3 contains the necessary technical background from free probability theory. Sections 4, 5, and 6 prove the results. And Section 7 concludes.

## 2 Results

A *non-commutative probability space* $(\mathcal{A}, E)$ is
a unital $*$-algebra $\mathcal{A}$ and a positive linear functional $E:\mathcal{A}\to\mathbb{C}$ such that $E(1) = 1$. We will assume that the functional $E$ is
tracial, i.e., $E(AB) = E(BA)$ for any two operators $A$
and $B$ from algebra $\mathcal{A}$. The elements of algebra $\mathcal{A}$
are called *random variables* and the functional $E$ is called the
*expectation*. The numbers $E(X^k)$ are called *moments* of the random variable $X$.

A prototypical example of a non-commutative probability space is a group algebra. That is, for a countable group $G$ we consider the Hilbert space $L^2(G, \nu)$, where $\nu$ is a counting measure, and consider the left action of $G$ on $L^2(G, \nu)$: if $g\in G$ and $f\in L^2(G,\nu)$ then $(gf)(h) = f(g^{-1}h)$. The elements of the group algebra are finite sums $a = \sum c_g\, g$, and we can extend by linearity the action of the group on $L^2(G,\nu)$ to the action of the algebra on $L^2(G,\nu)$. We can additionally complete the resulting operator algebra in an appropriate topology. The expectation of an element $a$ is defined as $E(a) = \langle a\,\delta_e, \delta_e\rangle$, where $e$ is the identity of the group.

Another important example is the algebra of random matrices. The expectation of an element $X$ in this algebra is defined as $E(X) = N^{-1}\,\mathbb{E}\left(\operatorname{Tr} X\right)$, where $\mathbb{E}$ is the expectation with respect to underlying randomness and $N$ is the dimension of the random matrix. For more details about these examples the reader may consult Hiai and Petz (2000).
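As a quick numerical illustration of this state (ours, not part of the paper), the normalized trace of powers of a Wigner matrix approximates the second and fourth moments, 1 and 2, of the semicircle law:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000

# A Wigner matrix: symmetric with (approximately) N(0, 1/N) entries.
g = rng.standard_normal((N, N)) / np.sqrt(N)
w = (g + g.T) / np.sqrt(2.0)

E = lambda a: np.trace(a) / N   # the normalized trace state

# The second and fourth moments approach the semicircle-law moments 1 and 2.
m2, m4 = E(w @ w), E(w @ w @ w @ w)
print(m2, m4)
assert abs(m2 - 1.0) < 0.1 and abs(m4 - 2.0) < 0.2
```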

The concept of freeness substitutes for the concept of independence. Let sub-algebras $\mathcal{A}_1, \dots, \mathcal{A}_k \subset \mathcal{A}$ be given, and let $a_1, \dots, a_m$ be elements of these sub-algebras such that neighboring elements belong to different sub-algebras: $a_j \in \mathcal{A}_{i(j)}$ with $i(1)\neq i(2),\ i(2)\neq i(3),\ \dots,\ i(m-1)\neq i(m)$.

###### Definition 1

The algebras $\mathcal{A}_1, \dots, \mathcal{A}_k$ (and their elements) are
*free*, if $E\left(a_1 a_2\cdots a_m\right) = 0$ provided that $E\left(a_j\right) = 0$ for every $j$ and the elements $a_j$ satisfy the condition above.

Consider the group algebra for a free group with at least two generators. Then the operators corresponding to generators are free in the sense of the previous definition. For the algebra of large random matrices, Voiculescu proved the asymptotic freeness of two classically independent Gaussian matrices, where asymptotic means that the property in the previous definition is approached as the dimension of matrices grows, $N\to\infty$ (see Voiculescu (1991)).
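Asymptotic freeness can be checked numerically (our illustration): for two independent Wigner matrices the alternating mixed moment $E(ABAB)$ is close to 0, as freeness of centered variables predicts, whereas classically independent commuting variables would give $E(A^2)E(B^2)\approx 1$:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1000

def wigner():
    g = rng.standard_normal((N, N)) / np.sqrt(N)
    return (g + g.T) / np.sqrt(2.0)

A, B = wigner(), wigner()
E = lambda m: np.trace(m) / N

# For free centered variables E(ABAB) = 0, while for commuting
# independent variables it would be E(A^2) E(B^2), roughly 1 here.
val = E(A @ B @ A @ B)
print(val)
assert abs(val) < 0.1
```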

It turns out that many concepts of classical probability theory can be transferred to the case of free random variables. For example, for a self-adjoint variable we can define its distribution function. Indeed, if $X$ is a self-adjoint operator then by the spectral decomposition theorem it can be written as

$$X = \int_{\mathbb{R}} \lambda\, dP(\lambda),$$

where $P$ is a positive, projector-valued measure, i.e., a mapping that
sends sets of the real axis to orthogonal projectors. This allows
definition of the *spectral measure* $\mu$ of $X$, which is a
measure with the following distribution function:

$$F(x) = E\left(P\{(-\infty, x]\}\right).$$

We can calculate the expectation of any summable function of a self-adjoint variable by using its spectral measure:

$$E\, f(X) = \int_{\mathbb{R}} f(\lambda)\, d\mu(\lambda).$$
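In matrix examples with the state $E = \operatorname{Tr}/N$ this formula is just averaging over eigenvalues; a minimal check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50

g = rng.standard_normal((N, N))
X = (g + g.T) / 2.0                    # a self-adjoint "random variable"
lam, v = np.linalg.eigh(X)             # spectral decomposition of X

# E f(X) computed directly equals the mean of f over the eigenvalues,
# i.e. the integral of f against the empirical spectral measure.
E_X3 = np.trace(X @ X @ X) / N
assert np.isclose(E_X3, np.mean(lam ** 3))

expX = (v * np.exp(lam)) @ v.T         # f(X) for f = exp, via the spectrum
assert np.isclose(np.trace(expX) / N, np.mean(np.exp(lam)))
print(E_X3)
```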

Let $X_1, X_2, \dots$ be free identically-distributed
positive random variables. Consider $\Pi_n = X_n X_{n-1}\cdots X_1$ and $Y_n = X_n\circ X_{n-1}\circ\cdots\circ X_1$ (by convention we multiply on
the left, so that, for example, $\Pi_3 = X_3 X_2 X_1$). We will see later that these variables have the same moments: $E\left(\Pi_n^k\right) = E\left(Y_n^k\right)$. As a first step let us
record some simple results about the expectation and variance of $\Pi_n$ and $Y_n$.
We define *variance* of a random variable $X$ as $\operatorname{Var}(X) = E\left(X^2\right) - \left(E X\right)^2$.

###### Proposition 1

Suppose that $X_i$ are self-adjoint and $E(X_i) = 1$. Then $E(\Pi_n) = E(Y_n) = 1$ and $\operatorname{Var}(Y_n) = n\sigma^2$, where $\sigma^2 = \operatorname{Var}(X_i)$.

Note that the linear growth in the variance of $Y_n$ is in contrast with the classical case, where only the variance of the logarithm of the product grows linearly. We will prove this Proposition later when we have more technical tools available. Before that we are going to formulate the main results.

Let $\|X\|$ denote the usual operator norm of operator $X$.

###### Theorem 1

Suppose that $X_1$, …, $X_n$ are
identically-distributed positive self-adjoint free variables. Suppose also
that $E(X_i) = 1$, and let $Y_n = X_n\circ\cdots\circ X_1$. Then

(1) there exists a constant $C$ such that

$$\|Y_n\| \leq Cn,$$

and

(2)

$$\|Y_n\| \geq \sqrt{1 + \sigma^2 n},\qquad \text{where } \sigma^2 = \operatorname{Var}(X_i).$$

For the next theorem define $\|X\|_2 := \sqrt{E\left(X^{*}X\right)}$.

###### Theorem 2

Suppose that $X_1$, …, $X_n$ are free
identically-distributed variables (not necessarily self-adjoint), and let $\Pi_n = X_n\cdots X_1$. Then

(1) there exists a constant $C$ such that

$$\|\Pi_n\| \leq C\sqrt{n}\,\|X_1\|_2^{\,n},$$

and

(2)

$$\|\Pi_n\| \geq \|X_1\|_2^{\,n}.$$

###### Corollary 1

Suppose that $X_1$, …, $X_n$ are free identically-distributed variables (not necessarily self-adjoint). Then

$$\lim_{n\to\infty}\frac{1}{n}\log\|\Pi_n\| = \log\|X_1\|_2.$$

Next, suppose that the algebra $\mathcal{A}$ acts on an
(infinite-dimensional) Hilbert space $H$. In other words, let $\pi$ be a
representation of $\mathcal{A}$. We call a representation *cyclic* if there exists a vector $\xi$ such that $E(A) = \langle\pi(A)\xi, \xi\rangle$ for all operators $A\in\mathcal{A}$.
The vectors with this property are also called *cyclic*.

###### Theorem 3

Suppose $\pi$ is a cyclic representation of $\mathcal{A}$, $\xi$ is its cyclic vector, and $X_1$, …, $X_n$ are free identically-distributed variables from $\mathcal{A}$. Then, with $\Pi_n = X_n\cdots X_1$,

$$\|\pi(\Pi_n)\xi\| = \left(\sqrt{E\left(X_1^{*}X_1\right)}\right)^{n}.$$

###### Corollary 2

If $\xi$ and $\eta$ are cyclic then

$$\frac{1}{n}\log\frac{\|\pi(\Pi_n)\xi\|}{\|\pi(\Pi_n)\eta\|} \to 0$$

as $n\to\infty$.

## 3 Preliminaries

The *Cauchy transform* of a bounded random variable $X$ is defined as
follows:

$$G_X(z) = E\left((z - X)^{-1}\right) = \sum_{k=0}^{\infty}\frac{E\left(X^k\right)}{z^{k+1}}.$$

This power series is convergent for $|z| > \|X\|$. Let us also define the *$\psi$-function* of $X$:

$$\psi_X(z) = E\left(\frac{zX}{1 - zX}\right) = \sum_{k=1}^{\infty} E\left(X^k\right) z^k.$$

The $\psi$-function is convergent for $|z| < \|X\|^{-1}$ and it is related to the Cauchy transform by the following equality:

$$\psi_X(z) = \frac{1}{z}\, G_X\!\left(\frac{1}{z}\right) - 1.$$

If $X$ is bounded and $E(X)\neq 0$, then for $z$ in a
sufficiently small neighborhood of $0$ the inverse of $\psi_X$ is defined, which we denote as $\chi_X$.
Then the *S-transform* is defined as

$$S_X(z) = \chi_X(z)\,\frac{1+z}{z}. \qquad (3)$$

Let us write out several first terms in the power expansions for $\psi_X$, $\chi_X$, and $S_X$. Suppose for simplicity that $E(X) = 1$ and let $\sigma^2 = \operatorname{Var}(X) = E\left(X^2\right) - 1$. Then,

$$\psi_X(z) = z + (1+\sigma^2)z^2 + \cdots,\qquad \chi_X(z) = z - (1+\sigma^2)z^2 + \cdots,\qquad S_X(z) = 1 - \sigma^2 z + \cdots.$$
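These leading coefficients follow by straightforward substitution and coefficient matching; a sketch of the computation, under the normalization $E(X) = 1$ and $\sigma^2 = E(X^2) - 1$:

```latex
% Leading terms of psi, its inverse chi, and the S-transform.
\begin{aligned}
\psi_X(z) &= \sum_{k\geq 1} E(X^k)\, z^k
           = z + (1+\sigma^2) z^2 + O(z^3), \\
\chi_X(z) &= z + a z^2 + O(z^3)
  \ \text{with}\ \psi_X(\chi_X(z)) = z
  \ \Longrightarrow\ a + (1+\sigma^2) = 0, \\
S_X(z)    &= \chi_X(z)\,\frac{1+z}{z}
           = \bigl(1 - (1+\sigma^2) z + O(z^2)\bigr)(1+z)
           = 1 - \sigma^2 z + O(z^2).
\end{aligned}
```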

The main theorem regarding the multiplication of free random variables was proved by Voiculescu (1987). Later the proof was significantly simplified by Haagerup (1997).

###### Theorem 4 (Voiculescu)

Suppose $X$ and $Y$ are bounded free random variables. Suppose also that $E(X)\neq 0$ and $E(Y)\neq 0$. Then

$$S_{XY}(z) = S_X(z)\, S_Y(z).$$

In particular, this theorem implies that $S_{Y_n}(z) = \left(S_X(z)\right)^n$, where $S_X$ denotes the $S$-transform of any of $X_i$. Now it is easy to prove Proposition 1. Indeed, note first that $E(\Pi_n) = E(X_n)\cdots E(X_1) = 1$ because the $X_i$ are free. Next, let us denote $\left(S_X(z)\right)^n$ as $S_n(z)$. Then, using the power expansions we can write:

$$S_n(z) = \left(1 - \sigma^2 z + \cdots\right)^n = 1 - n\sigma^2 z + \cdots,$$

where $\sigma^2 = \operatorname{Var}(X_i)$. Then, using the power expansion in (3), we conclude that

$$\chi_n(z) = \frac{z}{1+z}\, S_n(z) = z - \left(1 + n\sigma^2\right)z^2 + \cdots
\quad\text{and}\quad
\psi_n(z) = z + \left(1 + n\sigma^2\right)z^2 + \cdots.$$

Next, by definition, $E(Y_n)$ is the coefficient of $z$ in $\psi_n(z)$, and $E\left(Y_n^2\right)$ is the coefficient of $z^2$. Therefore, we can conclude that $E(Y_n) = 1$ and $\operatorname{Var}(Y_n) = E\left(Y_n^2\right) - 1 = n\sigma^2$. QED.
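Proposition 1 can be checked numerically for $n = 2$ by approximating free positive variables with independent Wishart matrices, for which $E(X)\approx 1$ and, by the Marchenko–Pastur law, $\sigma^2\approx 1$ (our illustration; size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 400

def wishart():
    # Approximates a free positive variable: E(X) ~ 1, Var(X) ~ 1.
    g = rng.standard_normal((N, N))
    return g @ g.T / N

def sqrtm_psd(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

E = lambda m: np.trace(m) / N

X1, X2 = wishart(), wishart()
s2 = sqrtm_psd(X2)
Y2 = s2 @ X1 @ s2                      # the symmetric product of X2 and X1

sigma2 = E(X1 @ X1) - E(X1) ** 2       # ~ 1 by Marchenko-Pastur
mean, var = E(Y2), E(Y2 @ Y2) - E(Y2) ** 2
print(mean, var)                        # ~ 1 and ~ 2 * sigma2
assert abs(mean - 1.0) < 0.1
assert abs(var - 2.0 * sigma2) < 0.4
```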

## 4 Proof of Theorem 1

Throughout this section we assume that $X_i$ are self-adjoint, $E(X_i) = 1$, and the support of the spectral distribution of $X_i$ belongs to $[0, L]$ for some fixed $L$.

Let us first go in a simpler direction and derive a lower bound on $\|Y_n\|$. That is, we are going to prove claim (2) of the theorem. From Proposition 1, we know that $E(Y_n) = 1$ and $\operatorname{Var}(Y_n) = n\sigma^2$. It is clear that for every positive random variable $A$ it is true that $\|A\|\geq E(A)$, and therefore $\left\|Y_n^2\right\| \geq E\left(Y_n^2\right) = 1 + n\sigma^2$. Applying this to $\|Y_n\| = \sqrt{\left\|Y_n^2\right\|}$, we get $\|Y_n\| \geq \sqrt{1 + n\sigma^2}$. In particular, $\|Y_n\| \geq \sigma\sqrt{n}$, so (2) is proved.

Now let us prove claim (1). By Theorem 4, $S_{Y_n}(z) = \left(S_X(z)\right)^n$. The idea of the proof is to investigate how $S_X(z)$ behaves for small $z$. It turns out that if $z$ is of the order of $1/n$, then $\left(S_X(z)\right)^n$ is bounded away from zero by a constant that does not depend on $n$. We will show that this fact implies that $\psi_n$ (i.e., the $\psi$-function for $Y_n$) has a convergent power series in a disc of radius of order $1/n$, and that therefore the Cauchy transform of $Y_n$ has a convergent power series for $|z|$ larger than a multiple of $n$. This fact and the Perron-Stieltjes inversion formula imply that the support of the distribution of $Y_n$ is inside $[0, Cn]$.

In the proof we need a result about functional inversions, formulated below. By a function holomorphic in a domain $D$, we mean a function which is bounded and differentiable in $D$.

###### Lemma 1 (Lagrange’s inversion formula)

Suppose $f(z)$ is a function of a complex variable, which is holomorphic in a neighborhood of $z = 0$ and has the Taylor expansion

$$f(z) = a_1 z + a_2 z^2 + \cdots,\qquad a_1 \neq 0,$$

converging for all sufficiently small $|z|$. Then the functional inverse of $f$ is well defined in a neighborhood of $z = 0$ and the Taylor series of the inverse is given by the following formula:

$$f^{(-1)}(w) = \sum_{n=1}^{\infty} b_n w^n,\qquad b_n = \frac{1}{2\pi i n}\oint_{\gamma}\frac{dz}{\left(f(z)\right)^n},$$

where $\gamma$ is a circle around $z = 0$ in which $f$ has only one zero.

For the proof see Theorems II.3.2 and II.3.3 in Markushevich (1977), or Section 7.32 in Whittaker and Watson (1927).
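To make the formula concrete, here is a self-contained check on the test function $f(z) = z + z^2$ (our example, not from the paper): Lagrange's formula gives the inverse-series coefficients $(-1)^{n-1}$ times the Catalan numbers, and composing the series back recovers the identity:

```python
import math

# Inverse of f(z) = z + z^2 via Lagrange's formula:
# b_n = (1/n) [z^{n-1}] (z / f(z))^n = (1/n) [z^{n-1}] (1+z)^{-n}
#     = (-1)^{n-1} C(2n-2, n-1) / n   (a signed Catalan number).
def inverse_coeff(n):
    return (-1) ** (n - 1) * math.comb(2 * n - 2, n - 1) // n

K = 8
b = [0] + [inverse_coeff(n) for n in range(1, K + 1)]

def mul(p, q):
    # Product of power series truncated at order K.
    r = [0] * (K + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= K:
                r[i + j] += pi * qj
    return r

# Verify f(g(w)) = g(w) + g(w)^2 = w through order K.
g2 = mul(b, b)
f_of_g = [b[i] + g2[i] for i in range(K + 1)]
assert f_of_g == [0, 1] + [0] * (K - 1)
print(b[1:])  # [1, -1, 2, -5, 14, -42, 132, -429]
```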

###### Lemma 2

Proof:

where $\mu$ denotes the spectral distribution of the variable $X$. QED.

###### Lemma 3

The function has only one zero in and if then

Proof: If then

Therefore, by Rouché’s theorem, has only one zero in this area.

If then

QED.

###### Lemma 4

If then

Proof: Using the previous lemma we can estimate :

Then

provided that QED.

###### Lemma 5

If then

Proof: Recall that Then we can write:

Then the previous lemma implies that for and we have the estimate:

Note that because Therefore,

QED.

###### Lemma 6

For every positive integer $n$: if then

Proof: Let us first prove the upper bound on The previous lemma implies that

Now let us prove the lower bound. The previous lemma implies that

In an equivalent form,

(4)

Recall the following elementary inequality: If then

Let . Then

Substituting this in (4), we get

or

QED.

By Theorem 4, $\left(S_X(z)\right)^n$ is the $S$-transform of the variable $Y_n$. The corresponding inverse $\psi$-function is

$$\chi_n(z) = \frac{z}{1+z}\left(S_X(z)\right)^n,$$

and by Lemma 1 its functional inverse, the $\psi$-function of $Y_n$, has the Taylor series

$$\psi_n(z) = \sum_{k=1}^{\infty} b_k z^k. \qquad (5)$$

First, we estimate $\left(S_X(z)\right)^n$.

###### Lemma 7

If then

Proof: Write

QED.

###### Lemma 8

The function has only one zero in and if then

Proof: Recall that by definition in (3), Therefore,

and by Lemma 7 we have the following estimate:

Therefore, by Rouché’s theorem, has only one zero in

###### Lemma 9

The radius of convergence of series (5) is at least

Proof: By the previous lemma, the coefficient before $z^k$ in series (5) can be estimated as follows:

This implies that series (5) converges at least for QED.

###### Lemma 10

The support of the spectral distribution of belongs to the interval

Proof: The variable $Y_n$ is self-adjoint and has a well-defined spectral measure, supported on the real axis. We can infer the Cauchy transform of this measure from the $\psi$-function:

Using Lemma 9, we can conclude that the power series for the Cauchy transform of $Y_n$ around $z = \infty$ converges in the area The coefficients of this series are real. Therefore, using the Perron-Stieltjes formula we conclude that the spectral measure is zero outside of the interval. QED.

## 5 Proof of Theorem 2

The norm of the operator $\Pi_n$ coincides with the square root of the norm of the operator $\Pi_n^{*}\Pi_n$. Therefore, all we need to do is to estimate the norm of the self-adjoint operator $\Pi_n^{*}\Pi_n$.
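This is the $C^{*}$-identity $\|A\|^2 = \|A^{*}A\|$; in finite dimensions it says that the largest singular value of $A$ is the square root of the largest eigenvalue of $A^{*}A$, which is easy to verify (our illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

op_norm = np.linalg.norm(A, 2)        # largest singular value of A
gram = A.conj().T @ A                 # the operator A* A
# ||A||^2 equals ||A* A|| (the top eigenvalue of the positive matrix A* A).
assert np.isclose(op_norm, np.sqrt(np.linalg.norm(gram, 2)))
print(op_norm)
```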

###### Lemma 11

For all bounded operators $A$ and $B$, the products $AB$ and $BA$ have the same spectral distribution.

Proof: Since $E$ is tracial, $E\left((AB)^k\right) = E\left((BA)^k\right)$ for every $k$. Therefore, $AB$ and $BA$ have the same sequence of moments and, therefore, the same distribution. QED.

If two variables $A$ and $B$ have the same sequence of moments, we say that
they are *equivalent* and write $A\sim B$. In particular, two
self-adjoint bounded variables have the same spectral distribution if and
only if they are equivalent.

###### Lemma 12

Let $A$, $B$, and $C$ be three bounded operators from a non-commutative probability space. If $A\sim B$, $A$ is free from $C$, and $B$ is free from $C$, then $AC\sim BC$, $CA\sim CB$, and $CAC\sim CBC$.

Proof: Since $A$ and $C$ are free, the moments of $AC$ can be computed from the moments of $A$ and $C$. The computation is exactly the same for $BC$, since $B$ and $C$ are also free. In addition, we know that $A$ and $B$ have the same moments. Consequently, $AC$ has the same moments as $BC$, i.e., $AC\sim BC$. The other equivalences are obtained similarly. QED.

###### Lemma 13

If $A\sim B$, then $S_A(z) = S_B(z)$. In words, if two variables are equivalent, then they have the same $S$-transform.

Proof: From the definition of the $\psi$-function, it is clear that if $A\sim B$, then $\psi_A(z) = \psi_B(z)$. This implies that $\chi_A(z) = \chi_B(z)$ and therefore $S_A(z) = S_B(z)$. QED.

For example, $S_{X_i^{*}X_i}(z)$ does not depend on $i$, and we will denote this function as $S_{X^{*}X}(z)$.

###### Lemma 14

If $X_1,\dots,X_n$ are free, then

$$S_{\Pi_n^{*}\Pi_n}(z) = \prod_{i=1}^{n} S_{X_i^{*}X_i}(z),$$

and if $X_i$ are in addition identically distributed, then

$$S_{\Pi_n^{*}\Pi_n}(z) = \left(S_{X^{*}X}(z)\right)^{n}.$$

Proof: We will use induction. For $n = 1$ we have $\Pi_1^{*}\Pi_1 = X_1^{*}X_1$. Therefore $S_{\Pi_1^{*}\Pi_1}(z) = S_{X_1^{*}X_1}(z)$. Suppose that the statement is proved for $n - 1$, and write $\Pi_n = \Pi' X_1$, where $\Pi' = X_n\cdots X_2$. Then

$$\Pi_n^{*}\Pi_n = X_1^{*}\left(\Pi'^{*}\Pi'\right)X_1 \sim \left(\Pi'^{*}\Pi'\right)\left(X_1 X_1^{*}\right) \sim \left(\Pi'^{*}\Pi'\right)\left(X_1^{*}X_1\right),$$

where the first equivalence holds because $E$ is tracial and it is easy to check that the products have the same moments. Therefore,

$$S_{\Pi_n^{*}\Pi_n}(z) = S_{\Pi'^{*}\Pi'}(z)\, S_{X_1^{*}X_1}(z)$$

by Lemmas 11 and 12. Then the inductive hypothesis implies that

$$S_{\Pi_n^{*}\Pi_n}(z) = \prod_{i=1}^{n} S_{X_i^{*}X_i}(z).$$

We have managed to represent $S_{\Pi_n^{*}\Pi_n}$ as $\left(S_{X^{*}X}\right)^{n}$, and therefore all the arguments of the previous section are applicable, except that we are interested in the variable $X^{*}X$, whose expectation is not necessarily $1$, rather than in a positive variable with expectation $1$.