Proceedings of the National Academy of Sciences of the United States of America
2004 Jan 7; 101(3): 727–731. doi: 10.1073/pnas.0305296101

Multiplier operator algebras and applications

David P. Blecher, Vrej Zarikian
PMCID: PMC321748  PMID: 14711990

Abstract

The one-sided multipliers of an operator space X are a key to “latent operator algebraic structure” in X. We begin with a survey of these multipliers, together with several of the applications that they have had to operator algebras. We then describe several new results on one-sided multipliers, and new applications, mostly to one-sided M-ideals.

1. Introduction

For every closed linear subspace X of B(H), for a Hilbert space H, D.P.B. has introduced two subalgebras

Mℓ(X), Aℓ(X) ⊆ B(X),

where B(X) is the space of bounded linear maps on X. The first, Mℓ(X), is an operator algebra (we will define this term formally in the next section). The second, Aℓ(X), is a C*-algebra and, indeed, is a W*-algebra (that is, it is an “abstract von Neumann algebra”) if X is weak* closed in B(H). The elements of these algebras are maps on X, called left multipliers of X. Of course there is a matching theory of right multipliers, but we shall avoid mentioning it because it is essentially identical (with appropriate modifications such as replacing “left” by “right” and “column” by “row,” and so on). By a one-sided multiplier we mean of course a left or a right multiplier.

The following list gives the main areas where the theory of multiplier algebras is being applied currently. We will discuss each point in greater detail below.

  1. Most importantly perhaps, the algebras Mℓ(X) and Aℓ(X) are key to “latent operator algebraic structure” in the operator space X. We discuss this in further detail in sections 3 and 4.

  2. These algebras allow one to define and develop a noncommutative M-ideal theory; this is treated in sections 5 and 6. We recall for now that “classical M-ideals” are a basic tool for Banach spaces, and they have an extensive and useful theory (1). We develop a “one-sided,” or noncommutative, generalization of M-ideals, which we call left and right M-ideals, and we show that this class includes examples that are of interest to operator algebraists. We then are able to extend much of the classical M-ideal theory to our setting.

  3. For many key examples, the spaces Mℓ(X) and Aℓ(X) do consist precisely of the maps on X of most interest. For example, if X is a C*-module (see, for example, refs. 2 and 3 for the definition and basic theory of these objects), then these spaces consist of the maps that C*-algebraists are interested in [see Example 4.1 iii]. Many of our results on left multipliers and right M-ideals may then be viewed as generalizations of important facts about C*-modules and their “dual variant,” W*-modules. Thus, the multiplier algebras facilitate a generalization of some C*- and W*-module techniques to operator spaces. We will not, however, emphasize this aspect in this note.

  4. Left multipliers work admirably in settings involving dual spaces, unlike some earlier techniques. We will amplify this point at the end of sections 4 and 6.

The notion of left multipliers above, and the corresponding algebras Mℓ(X) and Aℓ(X), are best viewed via a sequence of equivalent definitions, some of which we will meet in sections 3 and 4. None of these definitions is best for all purposes; each seems good for some purposes and unsuitable for others. However, one of them appears to be “most useful.” Namely, the unit ball of Mℓ(X) may be described as consisting of the linear maps T: X → X satisfying

‖ [T(x) ; y] ‖ ≤ ‖ [x ; y] ‖ for the columns [T(x) ; y], [x ; y] ∈ C2(X)  [1]

(the precise statement may be found in Theorem 3.2). Because the criterion in Eq. 1 is so simple, it is easy to see that left multipliers remain left multipliers under most of the basic functional analytic operations: subspaces, tensor products, duality, interpolation, and so on. This is extremely useful in our theory and applications.
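To make the criterion in Eq. 1 concrete, here is a finite-dimensional numerical sketch of our own (not from the paper), assuming numpy. We take X = M2, concretely inside B(ℂ²), and T = left multiplication by a contraction a; the column norms are computed as operator norms of stacked 4 × 2 matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def opnorm(m):
    # operator (spectral) norm: largest singular value
    return np.linalg.norm(m, 2)

# X = M_2, concretely inside B(C^2); T(x) = ax for a contraction a.
a = rng.standard_normal((2, 2))
a = a / opnorm(a)                          # now ||a|| = 1

violations = 0
for _ in range(200):
    x = rng.standard_normal((2, 2))
    y = rng.standard_normal((2, 2))
    lhs = opnorm(np.vstack([a @ x, y]))    # || [T(x); y] ||, a 4 x 2 column
    rhs = opnorm(np.vstack([x, y]))        # || [x; y] ||
    if lhs > rhs + 1e-10:
        violations += 1

# If the multiplying operator has norm > 1 the criterion fails:
# with x = I and y = 0, || [2a; 0] || = 2||a|| = 2 > 1 = || [I; 0] ||.
bad = opnorm(np.vstack([2.0 * a, np.zeros((2, 2))]))
```

The inequality holds here because [ax ; y] = diag(a, 1)·[x ; y] and ‖diag(a, 1)‖ = max(‖a‖, 1) ≤ 1.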

This note is addressed to a general reader with a knowledge of functional analysis; the background facts needed may be found in the first six chapters of ref. 4.

2. Operator Spaces and Operator Algebras

We begin by recalling a few basic definitions from operator space theory.

A concrete operator space is a linear subspace X of B(H), for a Hilbert space H. For such a subspace X we may view the space Mm,n(X) of m × n matrices with entries in X as the corresponding subspace of B(H(n), H(m)). Thus, this matrix space has a natural norm, for each m, n ∈ ℕ. An abstract operator space is a vector space X with a norm ‖ · ‖n on Mn(X), for all n ∈ ℕ, which is completely isometric to a concrete operator space. The latter term, complete isometry, refers to a linear map T: X → Y such that

‖[T(xij)]‖n = ‖[xij]‖n

for all n ∈ ℕ and [xij] ∈ Mn(X). A complete contraction is defined similarly, simply replacing “=” by “≤” in the last centered equation. There is also a celebrated theorem of Ruan, characterizing operator spaces in terms of two simple conditions on the matrix norms. The interested reader should consult ref. 5 for this and for other facts listed in this section, or the texts (6, 7). D.P.B. is also completing a text on related matters with Le Merdy.
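The gap between isometries and complete isometries is what drives the matricial theory, and it can be seen numerically. The following sketch (our own standard illustration, assuming numpy) shows that the transpose map on M2 is an isometry, but its amplification to M2(M2) has norm 2, so it is not a complete isometry.

```python
import numpy as np

def opnorm(m):
    # operator (spectral) norm: largest singular value
    return np.linalg.norm(m, 2)

# matrix units E[i][j] = E_ij in M_2
E = [[np.zeros((2, 2)) for _ in range(2)] for _ in range(2)]
for i in range(2):
    for j in range(2):
        E[i][j][i, j] = 1.0

# level 1: the transpose map is an isometry on M_2
m = np.array([[1.0, 2.0], [3.0, 4.0]])
iso_gap = abs(opnorm(m.T) - opnorm(m))

# level 2: the element x of M_2(M_2) whose (i, j) block is E_ji is,
# as a 4 x 4 matrix, a permutation matrix, so ||x|| = 1 ...
x = np.block([[E[j][i] for j in range(2)] for i in range(2)])
# ... but transposing each block (the amplified transpose map) gives
# a matrix containing a 2 x 2 block of ones, of norm 2
tx = np.block([[E[j][i].T for j in range(2)] for i in range(2)])
```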

We saw above that if X is an operator space, then so is Mm,n(X). In particular, we will reserve the symbols Cn(X) and Rn(X) for Mn,1(X) and M1,n(X), respectively.

Every operator space X has a dual object that is also an operator space. Namely, we write X* for the dual Banach space of X, with the norm of a matrix [fij] ∈ Mn(X*) given by

‖[fij]‖n = sup{‖[fij(xkl)]‖ : [xkl] ∈ Ball(Mm(X)), m ∈ ℕ}.

We call X*, equipped with this “operator space structure,” a dual operator space. An early result in operator space theory, due to D.P.B. and V. I. Paulsen, and independently to E. G. Effros and Z. J. Ruan, states that the canonical map from X into X** is a complete isometry.

Every Banach space X may be made into an operator space in canonical ways. The simplest of these is called MIN(X); this is obtained by assigning the smallest matrix norms {|| · ||n}n≥2 with respect to which X becomes an operator space (note that we do not mention n = 1 here; this is just the usual norm on X). Another way of describing MIN(X) is to embed X isometrically into a commutative C*-algebra C(K) (this may be done with K = Ball(X*) for example, using the Hahn-Banach theorem). Then MIN(X) is simply X endowed with the canonical matrix norms || · ||n inherited from the C*-algebra Mn(C(K)) ≅ C(K, Mn). A very nice situation occurs when a given “noncommutative/operator space theory,” applied to MIN(X) spaces, is just the associated classical theory. We will see an illustration of this in section 5; right M-ideals of MIN(X) turn out to be precisely the classical M-ideals of X.
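As a toy illustration of the MIN construction (our own sketch, assuming numpy; not from the paper), take X = ℝ² with the ℓ¹ norm. Then Ball(X*) is the ℓ∞ unit ball, and since f ↦ ‖[f(xij)]‖ is convex in f, the sup defining the MIN matrix norms is attained at the four extreme points s = (±1, ±1).

```python
import numpy as np
from itertools import product

def opnorm(m):
    # operator (spectral) norm of a scalar matrix
    return np.linalg.norm(m, 2)

def min_norm(xs):
    # MIN matrix norm for X = (R^2, ||.||_1): embed X in C(K) with
    # K = Ball(X*), the l^infty unit ball; the sup over K defining the
    # norm of [x_ij] in M_n(MIN(X)) is attained at the extreme points
    # s = (+-1, +-1).
    xs = np.asarray(xs, dtype=float)       # shape (n, n, 2)
    return max(opnorm(np.tensordot(xs, np.array(s), axes=([2], [0])))
               for s in product([-1.0, 1.0], repeat=2))

# at level n = 1 the MIN norm of a single vector is its original l^1 norm
v = [[[3.0, -4.0]]]
```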

A concrete operator algebra is a subalgebra A of B(H). For simplicity in this exposition, we shall always assume that A also has an identity of norm 1. By an (abstract) operator algebra we will mean a unital algebra that is also an operator space, which is completely isometrically homomorphic to a concrete operator algebra. There is an operator algebraic version of Ruan's theorem, due to D.P.B., Z. J. Ruan, and A. M. Sinclair (8):

Theorem 2.1. Up to completely isometric homomorphism, the abstract operator algebras are precisely the unital algebras A, which are also operator spaces, satisfying ‖1‖ = 1 and ‖ab‖n ≤ ‖a‖n ‖b‖n, for all n ∈ ℕ and a, b ∈ Mn(A).

As noted later, this theorem may be proved easily from the theory of left multipliers. Operator algebras may be regarded as the noncommutative function algebras. Indeed, note that every function algebra or uniform algebra is an operator algebra (being a subalgebra of a commutative C*-algebra). A main tool for studying a general operator algebra A is Arveson's noncommutative Shilov boundary (9–12). This is, loosely speaking, the smallest C*-algebra containing A: it is a quotient of any other C*-algebra containing A. Thus, for the disk algebra A(𝔻), for example, the smallest C*-algebra containing it is C(𝕋), the continuous functions on the circle. More generally, for an operator space X, the noncommutative Shilov boundary may be viewed as a smallest C*-module (or “ternary ring of operators”) containing X completely isometrically. Because we are aiming at an elementary presentation of our subject, we will not give a more precise definition of this “boundary,” referring the interested reader to refs. 12 and 13 for more details. However, it is in the background and will be mentioned, in passing, from time to time.

3. Multipliers and Algebraic Structure

With the extra structure consisting of the additional matrix norms on an operator algebra, one might expect to not have to rely as heavily on other structure, such as the product. Indeed one may ask questions like the following: Can one recover the product on an operator algebra A from its norm (or matrix norms)? Or could one describe the right ideals of A without referring to the product? The answers to these kinds of questions are essentially in the affirmative, as we shall see in the sequel, the key being the operator space multipliers mentioned in section 1. These objects were introduced by D.P.B. in 1999 (see ref. 13) and independently by W. Werner but in an “ordered” context (see ref. 14, for example). Soon after the main results in ref. 13 were found, Blecher and Paulsen (15) established an alternative approach to one-sided multipliers. A year later, Blecher, Effros, and Zarikian (16) found the very practical criterion shown in Eq. 1. Each of these contributed to the growing list of equivalent definitions of one-sided multipliers.

As we said in section 1, for any operator space X we have two subalgebras Aℓ(X) ⊆ Mℓ(X) of B(X). These can be defined in terms of the “noncommutative Shilov boundary” (13) or in terms of the “injective envelope” (15). However, for the sake of a simpler exposition we will not emphasize these approaches but, instead, will begin with the following equivalent definition of Mℓ(X): namely, as the linear maps T: X → X such that there exists some completely isometric embedding X ⊆ B(K), for a Hilbert space K, and an operator a ∈ B(K), with

T(x) = a x for all x ∈ X.

We define the “multiplier norm” of T to be the infimum of the quantities ‖a‖ for all K, a as above. Finally, the matrix norms on Mℓ(X) are specified by viewing a matrix t = [Tij] in Mn(Mℓ(X)) as a map Lt from Cn(X) = Mn,1(X) to itself, via the formula Lt([xi]) = [Σj Tij xj]. It turns out that Lt is also a left multiplier, and we define ‖t‖n to be the “multiplier norm” of Lt.

Proposition 3.1. For any operator space X, Mℓ(X) is an operator algebra.

The last result may be proved in several ways. In fact, it is more or less obvious from the equivalent definition of left multipliers in terms of the noncommutative Shilov boundary or injective envelope (13, 15). Not surprisingly, one may also prove Proposition 3.1 by using Theorem 2.1. However, one of the attractive features of the left multiplier approach is that it automatically yields theorems such as 2.1 as corollaries and, moreover, allows a relaxation of some of the hypotheses of such characterization results. This was the main motivation of D.P.B. for the introduction of operator space multipliers (13). Thus, for example, Theorem 2.1 follows in just a few lines by combining Proposition 3.1 with the following result (see the proof of theorem 1.11 in ref. 17 for details; this deduction was independently noticed by V. I. Paulsen).

Theorem 3.2. [Blecher, Effros, and Zarikian (16)] A linear map T: X → X on an operator space X is in Ball(Mℓ(X)) if and only if

‖ [T(xij) ; yij] ‖ ≤ ‖ [xij ; yij] ‖

for all n ∈ ℕ and [xij], [yij] ∈ Mn(X).

The norms in the last centered equation are just the norms in M2n,n(X). We remark that later simplifications of our original proof of this result were given by V. I. Paulsen (7), W. Werner, and D.P.B. (In fact, Theorem 3.2 was inspired by a similar theorem of W. Werner in an ordered context. Later, W. Werner (14) showed how one may view operator space multipliers within his ordered context and also how Theorem 3.2 may be deduced from his earlier result.)

The condition in Theorem 3.2 is purely in terms of the vector space structure and matrix norms on X; and yet it often encodes “operator algebraic structure.” For example, if A is an operator algebra (with an identity of norm 1), then A ≅ Mℓ(A) completely isometrically homomorphically. In fact, the “regular representation” λ: A → B(A) defined by λ(a)(b) = ab is a completely isometric homomorphism onto Mℓ(A).

From this last fact, we can provide a recipe to recover the product in an operator algebra. Let us suppose that A is an operator algebra with a forgotten product. For simplicity, let us first suppose that we do remember the identity element e. Form Mℓ(A) using Theorem 3.2, and define θ: Mℓ(A) → A by θ(T) = T(e). Then it follows from the fact in the last paragraph that the forgotten product on A is ab = θ(θ⁻¹(a) θ⁻¹(b)), for a, b ∈ A. If we have forgotten the identity element e too, then one may only hope to recover the product modulo the “unitary subgroup” of A, but this may be done in a similar manner to the previous case (see the remarks on p. 316 of ref. 16 for details).
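The recipe can be tested in the simplest concrete case (our own illustration, assuming numpy): A = M2 with its usual product and remembered identity e = I. Here θ⁻¹(a) is the regular representation λ(a), and θ(θ⁻¹(a)θ⁻¹(b)) returns the product ab.

```python
import numpy as np

rng = np.random.default_rng(1)
e = np.eye(2)                       # the remembered identity of A = M_2

def lam(a):
    # regular representation: lambda(a) is the left multiplier b -> ab
    return lambda b: a @ b

def theta(T):
    # theta : M_l(A) -> A, theta(T) = T(e)
    return T(e)

a = rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2))

recovered_a = theta(lam(a))          # theta inverts the regular representation
# theta(theta^{-1}(a) theta^{-1}(b)) = (lam(a) o lam(b))(e) = a(be) = ab
recovered_ab = theta(lambda c: lam(a)(lam(b)(c)))
```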

Of course, once one has such a recipe for recovering the product, one would expect Banach–Stone-type theorems for complete isometries between operator algebras to be more or less immediate. D.P.B. has discussed this elsewhere, and of course such results all have their origin in Kadison's well known Banach–Stone theorem for isometries between C*-algebras (18).

In summary, we are clearly beginning to substantiate our statement that left multipliers are a key to “operator algebraic structure.” Another fact along these lines is that the operator module actions (in the sense of ref. 19) of an operator algebra A on an operator space X are in a bijective correspondence with the completely contractive unital homomorphisms θ: A → Mℓ(X). Indeed, if A is a C*-algebra, then the correspondence is also with unital *-homomorphisms from A into Aℓ(X). This result, originally due to Blecher (see ref. 13, and see ref. 15 for an alternative approach), also follows in just a few lines from Theorem 3.2.

4. The C*-Algebra Aℓ(X)

As was the case for left multipliers, there are several equivalent definitions of Aℓ(X). For example, it may be defined to be the “diagonal C*-algebra” of Mℓ(X); that is, the span of the Hermitian elements in that operator algebra. Alternatively, although we will not use this formulation, Aℓ(X) may be defined to be the set of maps T: X → X for which there is a map S: X → X with

⟨T(x), y⟩ = ⟨x, S(y)⟩ for all x, y ∈ X,

where the “inner product” in the last formula is the operator-valued one inherited from the inner product on a noncommutative Shilov boundary of X. If H is a Hilbert space for example, and if Hc is H thought of as a fixed column in B(H), then the inner product above on X = Hc may be taken to be the usual inner product. A similar remark holds for C*-modules, which yields the facts stated next.

Example 4.1. (i) For a Hilbert space H, Mℓ(Hc) = Aℓ(Hc) = B(H) (see the notation above).

(ii) For a C*-algebra A, Aℓ(A) is the usual C*-algebra multiplier algebra M(A) of A, whereas Mℓ(A) is the usual left multiplier algebra LM(A) (see 3.12 in ref. 20).

(iii) Generalizing examples i and ii, if Y is a C*-module, then Mℓ(Y) is the space of bounded module maps on Y, whereas Aℓ(Y) is the C*-algebra of adjointable maps in the usual C*-module sense (see ref. 3).
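For Example 4.1 i, the adjointability relation ⟨T(x), y⟩ = ⟨x, S(y)⟩ can be checked directly (a numerical sketch of ours, assuming numpy): on X = Hc = ℂ³ regarded as a column, T(x) = ax is adjointable with S(y) = a*y.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = a.conj().T                      # candidate adjoint map: S(y) = a* y

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# We use <u, v> = sum_i u_i conj(v_i); note np.vdot conjugates its
# FIRST argument, so <u, v> = np.vdot(v, u).
lhs = np.vdot(y, a @ x)             # <T(x), y>, where T(x) = ax
rhs = np.vdot(S @ y, x)             # <x, S(y)>
gap = abs(lhs - rhs)
```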

These examples show that in certain cases the spaces Mℓ(X) and Aℓ(X) do consist precisely of the maps on X of most interest. However, we must issue a warning: for a “general space,” both Mℓ(X) and Aℓ(X) may be one-dimensional! In fact, the prevalence of spaces with only “trivial” one-sided multipliers is disappointing, until one remembers that this is simply reflecting the lack of any trace of operator algebra or operator module structure in such spaces. Thus, one should not expect the classical or noncommutative L1 spaces to have any nontrivial one-sided multipliers. This is also related to a lack of “noncommutative M-structure,” as we shall see later. On the other hand, one would expect L1 to have “noncommutative L-structure.” Interestingly, classical and noncommutative Lp spaces for p ≥ 2 may have nontrivial left multipliers and nontrivial “noncommutative M-structure.” This may be seen by first noting that, by Example 4.1 i above, L2 has plenty of left multipliers. So does L∞, by the fourth-last paragraph of section 3. Using Theorem 3.2, it is not hard to see that the complex interpolation method, applied to two left multipliers, produces a left multiplier on the interpolated space, which in this case is Lp (with a slightly nonstandard “operator space structure”).

A result that is extremely important in applications is the following fact from ref. 16:

Theorem 4.2. If X is a dual operator space, then Aℓ(X) is a W*-algebra.

This result opens another door to the use of von Neumann algebra methods in operator space theory. For example, the type decomposition, or Murray–von Neumann equivalences, in Aℓ(X) will have consequences for the structure of X. At the very least, the theorem implies that for a dual space X, the algebra Aℓ(X) will have lots of orthogonal projections (if it is nontrivial) and that these projections may be used in the ways that von Neumann algebraists and operator theorists are familiar with. These projections are called left M-projections on X, and they form a basic ingredient in the “noncommutative M-ideal” theory presented in the last two sections.

In the remainder of this section we continue to discuss duality. We recall that operator algebra questions often come in two varieties, a norm/C*-algebraic type or a weak*/von Neumann algebraic type. One often proves a theorem of the first type by first proving an analogous result of the second type, and then deducing the desired result by going to the second dual. For function algebras and algebras of operators there is often a technical obstacle to this approach; namely, that the Shilov boundary (or injective envelope) rarely works well in tandem with duality. Although operator space multipliers may be defined in terms of the noncommutative Shilov boundary (or injective envelope), they behave extremely well with respect to duality. This is mostly because of the existence of the criterion in Theorem 3.2. For example, using this criterion it is easy to see that

Mℓ(X) ⊆ Mℓ(X**) completely isometrically, via T ↦ T**.  [2]

Using duality properties of multipliers, one may prove versions of many of the basic theorems concerning operator algebras and their modules that are appropriate to dual spaces and the weak* topology. For example, the left multiplier approach and Theorem 3.2 were key to the first proof (see ref. 17) of the following “nonselfadjoint version” of Sakai's characterization of von Neumann algebras:

Theorem 4.3. (Le Merdy–Blecher) An operator algebra A is completely isometrically isomorphic (via a weak* homeomorphism) to a weak* closed unital subalgebra of B(H), if and only if A is a dual operator space.

5. Right Ideals in Operator Spaces

The one-sided multipliers above are intimately related to the generalization of the classical M-ideal notion to “right M-ideals” in operator spaces made by Blecher, Effros, and Zarikian (16). First, we recall the classical definitions, due to Alfsen and Effros (1, 21). These are based on the notion of the ∞-direct sum of two Banach spaces; namely, the algebraic direct sum of the spaces, with the norm ‖(x, y)‖ = max{‖x‖, ‖y‖}. An M-projection on a Banach space X is an idempotent linear map P: X → X such that the map from X to X ⊕∞ X given by

x ↦ (P(x), x − P(x))  [3]

is an isometry. This is simply saying that X = P(X) ⊕∞ (I − P)(X). The range of an M-projection is called an M-summand. A closed subspace J of a Banach space X is called an M-ideal if J⊥⊥ is the range of an M-projection on X**. These objects have an extensive and useful theory. Thus, in the text (1) the reader will find an extensive “calculus” for M-ideals, by which we mean a collection of hundreds of theorems concerning the properties of M-ideals, together with a long list of examples. For example, the M-ideals in a C*-algebra are just the closed two-sided ideals (see refs. 21 and 22). Analysts, realizing that they have encountered an M-ideal in the course of a calculation, often need only to consult such a text to obtain a desired conclusion.
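A minimal classical example (our own, assuming numpy): on X = ℝ³ with the sup norm, the coordinate projection P onto the first two coordinates is an M-projection, i.e., the map in Eq. 3 is an isometry into X ⊕∞ X.

```python
import numpy as np

rng = np.random.default_rng(3)

def sup_norm(v):
    return float(np.max(np.abs(v)))

# On X = R^3 with the sup norm (= R (+)_inf R (+)_inf R), the coordinate
# projection onto the first two coordinates is an M-projection.
P = np.diag([1.0, 1.0, 0.0])

max_gap = 0.0
for _ in range(500):
    x = rng.standard_normal(3)
    # ||x|| should equal max(||P x||, ||x - P x||), the norm in X (+)_inf X
    gap = abs(sup_norm(x) - max(sup_norm(P @ x), sup_norm(x - P @ x)))
    max_gap = max(max_gap, gap)
```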

It is therefore natural to ask whether there is a noncommutative, one-sided notion of M-ideals that, at the very least, has the property that the right M-ideals in MIN(X) are the classical M-ideals in X and the right M-ideals in C*-algebras are the closed right ideals. One could then hope to connect such objects to the rich existing one-sided ideal theory of C*-algebras, which is intimately related to “noncommutative topology” (see, for example, refs. 23 and 24). Our goal in the rest of this article is to describe such a notion, to relate our definition to one-sided multipliers, and then to begin to assemble a “calculus” for one-sided M-ideals. Such a program was proposed in 2000 by E. G. Effros, and it was initiated in ref. 16 and in the Ph.D. thesis of Zarikian (25).

It turns out that there are several equivalent definitions of left M-projections on an operator space X. For example, they may be defined to be the idempotent maps P: X → X such that the map in Eq. 3 is a complete isometry, but now viewed as a map from X to M2,1(X) = C2(X). Alternatively, the set of left M-projections on X is exactly the set of orthogonal projections P in the C*-algebra Aℓ(X). The range of a left M-projection is a right M-summand. As before, we call a closed subspace J of an operator space X a right M-ideal if J⊥⊥ is a right M-summand in X**.

The last few definitions are stated only in terms of the vector space structure and matrix norms. The following examples show that they often encode important algebraic information.

Example 5.1. (i) The M-ideals in a Banach space X are exactly the right M-ideals in MIN(X).

(ii) The right M-ideals in a C*-algebra are exactly the closed right ideals.

(iii) The right M-ideals in a C*-module are exactly the closed submodules.

(iv) The right M-ideals in an operator algebra are exactly the closed right ideals with a left contractive approximate identity.

As we said earlier, one would not expect spaces like L1 to have interesting one-sided M-structure. In the sequel (26) to the paper (16), we show for example that preduals of von Neumann algebras have no nontrivial one-sided M-ideals. Rather, such spaces will be interesting from the standpoint of “one-sided L-ideal” theory, where the latter is the “dual theory” to that of one-sided M-ideals.

6. The Calculus of One-Sided M-Ideals

In this section, we briefly present some new results and also describe a lengthy study that we have undertaken of one-sided M-ideals and multipliers. As mentioned in the last section, we wish to generalize the basic theorems from classical M-ideal theory, to develop a theory that will be useful for some problems in “noncommutative functional analysis.” As we shall see here, this often reduces to proving that left multipliers have certain properties, or to von Neumann algebra techniques and spectral theory [recall from Theorem 4.2 that Aℓ(X) is a von Neumann algebra if X is a dual space]. Indeed, most of the basic theory of classical M-ideals does generalize but usually requires quite different, “noncommutative” arguments! And perhaps surprisingly, the results do not become untidy in the noncommutative framework, although sometimes we have to add reasonable hypotheses. As one would expect, one also finds truly new phenomena in the new framework (one example of this is the existence, noted in section 4, of nontrivial one-sided M-structure in Lp spaces).

To illustrate some of the points above, consider the following result:

Theorem 6.1. The closed span in an operator algebra of a family of closed right ideals, each possessing a left contractive approximate identity for itself, is a right ideal also possessing a left contractive approximate identity.

A result like this is useful in practice, because the presence of an approximate identity is often key to calculations involving approximations. It is also interesting that this result is not true for Banach algebras, as one may easily see by testing three- or four-dimensional examples. In fact, we claim that the result is essentially a “one-sided M-ideal phenomenon.” Indeed, Theorem 6.1 is immediate from Example 5.1 iv, and the following general result:

Theorem 6.2. The closed span of a family {Jλ : λ ∈ Λ} of right M-ideals in an operator space X is a right M-ideal in X.

We sketch the proof of this theorem because it is quite instructive and shows the noncommutative nature of our arguments. By the definition of right M-ideals, and basic Banach space duality theory (passage to the second dual X**), one sees that the theorem will follow if it is the case that the weak* closed span of a family of right M-summands of a dual space Y is again a right M-summand of Y. By Theorem 4.2, Aℓ(Y) is a W*-algebra. By basic properties of families of projections in von Neumann algebras, it is easy to reduce the theorem to the assertion that the weak* closed span of two right M-summands is again a right M-summand. In fact, a much more striking result is true: the map P ↦ P(Y) is a lattice isomorphism between the lattice of projections in the W*-algebra Aℓ(Y) and the lattice of right M-summands of Y. The “meet” in the latter lattice is intersection, while the “join” is given by the weak* closed span. In particular, we claim that if P, Q are two left M-projections, then the weak* closure of P(Y) + Q(Y) is (P ∨ Q)(Y). To see this, we use the fact that P ∨ Q is the weak* limit in Aℓ(Y) of (P + Q)1/n, which in fact is valid for any two projections P, Q in a von Neumann algebra. If x is in (P ∨ Q)(Y), one can show that x = (P ∨ Q)(x) is the weak* limit of (P + Q)1/n(x). By a spectral theory argument, one can see that (P + Q)1/n(x) is in the norm closure of P(Y) + Q(Y). Thus x, and hence (P ∨ Q)(Y), is contained in the weak* closure of P(Y) + Q(Y). The reverse containment is quite obvious.
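The spectral-theory step, that (P + Q)1/n converges to the support projection P ∨ Q, can be watched numerically in M3 (our own finite-dimensional illustration, assuming numpy); here P, Q are two noncommuting rank-one projections.

```python
import numpy as np

def proj_onto(cols):
    # orthogonal projection onto the column span of `cols`
    q, _ = np.linalg.qr(cols)
    return q @ q.T

def frac_power(A, t):
    # A^t for a positive semidefinite symmetric A, via eigendecomposition
    w, v = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)      # kill tiny negative rounding errors
    return (v * w ** t) @ v.T

P = proj_onto(np.array([[1.0], [0.0], [0.0]]))                 # onto span{e1}
Q = proj_onto(np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2.0))  # onto span{e1 + e2}

# P v Q projects onto range(P) + range(Q) = span{e1, e2}
join = np.diag([1.0, 1.0, 0.0])

# for large n, (P + Q)^(1/n) is close to the support projection P v Q
err = np.max(np.abs(frac_power(P + Q, 1.0 / 4096.0) - join))
```

The convergence holds because λ1/n → 1 for every eigenvalue λ > 0 of P + Q, and 01/n = 0.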

There are two main techniques for proving results about one-sided M-ideals. We just saw an illustration of the first technique: heavy use of the fact that Aℓ(X) is a W*-algebra for dual spaces X, and the consequent permission to use von Neumann algebra facts and spectral theory. The second main technique is to first establish a result for Mℓ(X) or Aℓ(X), often using Theorem 3.2 for this, and then to use the fact that the projections in these algebras are just the left M-projections. We give just one illustration of this latter technique, an application to dual spaces. It follows from Eq. 2 that for any operator space X, we may regard Mℓ(X) as a subalgebra of Mℓ(X**). It is easy to see that this inclusion embeds Aℓ(X) as a C*-subalgebra of the W*-algebra Aℓ(X**). More is true if X is a dual operator space; in this case, we claim that there is a canonical conditional expectation E from the W*-algebra Aℓ(X**) onto its subalgebra Aℓ(X) and, moreover, that E is weak* continuous with respect to the weak* topologies for which these algebras are W*-algebras (see Theorem 4.2). We sketch the key idea in the proof of this claim. We define E by

E(T) = (ιY)* ∘ T ∘ ιX,

where Y is a predual of X, ιY is the canonical complete isometry from Y into Y** (so that its adjoint (ιY)* maps X** = Y*** onto X = Y*), and ιX : X → X** is the canonical embedding. If T ∈ Ball(Mℓ(X**)) and x, y ∈ X, then

‖ [E(T)(x) ; y] ‖ = ‖ [(ιY)*(T(ιX(x))) ; (ιY)*(ιX(y))] ‖ ≤ ‖ [T(ιX(x)) ; ιX(y)] ‖ ≤ ‖ [ιX(x) ; ιX(y)] ‖ = ‖ [x ; y] ‖,

where the second inequality follows from Theorem 3.2. A similar argument with matrices shows that E(T) satisfies the criterion in Theorem 3.2. Thus, E is a contraction from Mℓ(X**) into Mℓ(X). It is then easy to show that E maps Aℓ(X**) into Aℓ(X). Checking the remaining assertions of the claim is then routine.

The claim established in the last paragraph has for example the following corollaries:

Corollary 6.3. The weak* closure of a one-sided M-ideal in a dual operator space X is a one-sided M-summand of X.

To see this, suppose that J is the right M-ideal in question, and consider the canonical left M-projection P of X** onto J⊥⊥. Let Q = E(P), where E is the conditional expectation above. By a weak* continuity argument, one shows that Q is a left M-projection onto the weak* closure of J.

Corollary 6.4. (Kaplansky density for M-ideals) The unit ball of a one-sided M-ideal in a dual operator space is weak* dense in the unit ball of its weak* closure.

To see this, suppose that J,P,Q are as discussed in the paragraph above. If xX is in the unit ball of the weak* closure of J, by Goldstine's theorem we may choose a net (xt) in Ball(J) converging in the weak* topology of X** to P(x). Thus, xt = Q(xt) = E(P)(xt) converges in the weak* topology of X to E(P)(x) = Q(x) = x.

These corollaries are also true for general dual Banach spaces, with “one-sided” removed throughout. This may be seen by applying these results to MIN(X) for a Banach space X. We were surprised to find no trace of such results in the M-ideal literature.

Acknowledgments

D.P.B. was supported in part by a grant from the National Science Foundation. V.Z. was supported by a National Science Foundation Vertical Integration of Research and Education Postdoctoral Fellowship.

This paper was submitted directly (Track II) to the PNAS office.

References

  1. Harmand, P., Werner, D. & Werner, W. (1993) Lecture Notes in Mathematics (Springer, Berlin), Vol. 1547.
  2. Rieffel, M. A. (1982) Proc. Symp. Pure Math. 38, 285–298.
  3. Lance, E. C. (1995) London Mathematical Society Lecture Notes (Cambridge Univ. Press, Cambridge, U.K.), Vol. 210.
  4. Kadison, R. V. & Ringrose, J. R. (1997) Fundamentals of the Theory of Operator Algebras (Am. Math. Soc., Providence, RI), Vols. 1 and 2.
  5. Effros, E. G. & Ruan, Z. J. (2000) Operator Spaces (Oxford Univ. Press, Oxford).
  6. Pisier, G. (2003) London Mathematical Society Lecture Notes (Cambridge Univ. Press, Cambridge, U.K.), Vol. 294.
  7. Paulsen, V. I. (2002) Cambridge Studies in Advanced Mathematics (Cambridge Univ. Press, Cambridge, U.K.), Vol. 78.
  8. Blecher, D. P., Ruan, Z. J. & Sinclair, A. M. (1990) J. Funct. Anal. 89, 188–201.
  9. Arveson, W. B. (1969) Acta Math. 123, 141–224.
  10. Arveson, W. B. (1972) Acta Math. 128, 271–308.
  11. Hamana, M. (1979) Publ. Res. Inst. Math. Sci. Kyoto Univ. 15, 773–785.
  12. Hamana, M. (1999) Math. J. Toyama Univ. 22, 77–93.
  13. Blecher, D. P. (2001) J. Funct. Anal. 182, 280–343.
  14. Werner, W. (2003) J. Funct. Anal., in press.
  15. Blecher, D. P. & Paulsen, V. I. (2001) Pac. J. Math. 200, 1–17.
  16. Blecher, D. P., Effros, E. G. & Zarikian, V. (2002) Pac. J. Math. 206, 287–319.
  17. Blecher, D. P. (2001) J. Funct. Anal. 183, 498–525.
  18. Kadison, R. V. (1951) Ann. Math. 54, 325–338.
  19. Christensen, E., Effros, E. G. & Sinclair, A. M. (1987) Inventiones Mathematicae 90, 279–296.
  20. Pedersen, G. K. (1979) C*-Algebras and Their Automorphism Groups (Academic, San Diego).
  21. Alfsen, E. M. & Effros, E. G. (1972) Ann. Math. 96, 98–173.
  22. Smith, R. R. & Ward, J. D. (1978) J. Funct. Anal. 27, 337–349.
  23. Effros, E. G. (1963) Duke Math. J. 30, 391–412.
  24. Akemann, C. A. (1969) J. Funct. Anal. 4, 277–294.
  25. Zarikian, V. (2001) Ph.D. thesis (Univ. of California, Los Angeles).
  26. Blecher, D. P., Smith, R. R. & Zarikian, V. (2002) J. Operator Theory, in press.

