Proceedings of the National Academy of Sciences of the United States of America
. 2000 Jan 18;97(2):541–546. doi: 10.1073/pnas.97.2.541

Quantum groups with invariant integrals

Alfons Van Daele 1,*
PMCID: PMC33963  PMID: 10639115

Abstract

Quantum groups have been studied intensively for the last two decades from various points of view. The underlying mathematical structure is that of an algebra with a coproduct. Compact quantum groups admit Haar measures. However, if we want to have a Haar measure also in the noncompact case, we are forced to work with algebras without identity, and the notion of a coproduct has to be adapted. These considerations lead to the theory of multiplier Hopf algebras, which provides the mathematical tool for studying noncompact quantum groups with Haar measures. I will concentrate on the *-algebra case and assume positivity of the invariant integral. Doing so, I create an algebraic framework that serves as a model for the operator algebra approach to quantum groups. Indeed, the theory of locally compact quantum groups can be seen as the topological version of the theory of quantum groups as they are developed here in a purely algebraic context.


There are several approaches to quantum groups, but all of them have a common idea. Let me first describe this idea using a simple example. Take the group G of 2 × 2 matrices of the form

⎛ a  b ⎞
⎝ 0  1 ⎠
with a, b ∈ ℝ and a > 0.

This group is called the ax + b-group. Consider the algebra A of polynomial functions on this group. This algebra is generated by the matrix elements, considered as functions on G. It is an abelian algebra with an identity. The group multiplication in G induces a comultiplication on this algebra of functions (we will recall the definition of a comultiplication in the first section). A quantization of the ax + b-group as above will be a deformation of this algebra (into a non-abelian one) and, if necessary, of this comultiplication.

This transition to the noncommutative setting is precisely the idea of quantization. Indeed, roughly speaking, quantizing a space is deforming an algebra of functions on this space. Therefore, a quantum group G is a quantization of G as a space keeping track of the group structure during this process. Observe that a quantum group is not a special type of a group but that it is a generalization of a group.

What I describe above is essentially the function algebra approach to quantum groups (see, e.g., ref. 1). Dual to this approach is the quantization of the enveloping algebra of the Lie algebra of a Lie group [Drinfel'd (2) and Jimbo (3)]. Here the algebra itself contains the information of the group structure, and the deformation takes place mainly on the level of the comultiplication. We will say a little more about this approach, but from now on, we will work most of the time in the function algebra setting.

Look again at the example of the ax + b-group as above. This is a locally compact group, and therefore it carries a left and a right Haar measure. Because the group is not compact, these measures are not finite. The right Haar measure is given by a−1 da db, where da and db stand for the usual Lebesgue measures on ℝ. It is obvious that the polynomial functions will not be integrable. So, by considering the algebra of polynomial functions, we get an easy comultiplication (again, see the next section), but we cannot use the Haar measure. This problem will persist after quantization.

Functions on G will be integrable only if (roughly speaking) they tend to 0 fast enough as a and b go to infinity. This means that we have to consider another algebra of functions. This is rather easy to do in the nondeformed case but can get quite complicated in the deformed case (especially in this example).

These considerations take us naturally to the operator algebra approach to quantum groups. Indeed, with algebras of operators on a Hilbert space, we have the tool of spectral theory to define nice functions of self-adjoint (or more generally normal) operators. This explains why the operator algebra approach to quantum groups is almost necessary and in any case very natural when we want to study noncompact quantum groups with a Haar measure.

In ref. 4, Kustermans and Vaes go deeper into the operator algebra approach to quantum groups. In this paper, I will study multiplier Hopf *-algebras with positive integrals. This is a structure that describes quantum groups very similarly to the function algebra approach (where Hopf algebras are used), which is still of a purely algebraic nature but can be seen as a model for the more general operator algebra approach as treated by Kustermans and Vaes in ref. 4.

We do not get all locally compact quantum groups. On the other hand, the class contains the discrete and compact quantum groups, and it is self-dual in the sense of Pontryagin duality. But this is not the only remarkable fact; there is also a very rich underlying algebraic structure. All this makes it an interesting object of study; see refs. 5–7.

This paper is the first of two papers on the operator algebra approach to quantum groups; the second one is ref. 4. These two papers are written so as to be read independently. However, reading this paper first (which treats purely algebraic aspects) will certainly make it easier to understand the second one. Moreover, the second paper will yield a deeper understanding of the material in this first paper.

I have written this paper for the reader with a general background in mathematics; I have mathematicians as well as mathematical physicists in mind. I have chosen to be precise when formulating definitions and stating propositions and theorems. On the other hand, I am sometimes very loose when giving comments and underlying ideas.

Multiplier Hopf *-Algebras

I will begin this section by developing the notion of a comultiplication on a *-algebra. I will first consider algebras with identity, but I am mainly interested in algebras without identity, and therefore I will develop this notion later for such algebras. This nonunital case will turn out to be less obvious.

Recall that a *-algebra A over ℂ is a complex vector space with an associative multiplication that is compatible with the underlying linear structure and with an involution a ↦ a*. An involution is an antilinear map satisfying a** = a and (ab)* = b*a* for all a and b in A. Because we will work with *-algebras, we will always be working over the complex numbers.

We consider the tensor product A ⊗ A of A with itself. Recall that the elements in A ⊗ A are linear combinations of elements a ⊗ b with a, b ∈ A. Essentially by the definition of the tensor product, the expression a ⊗ b is linear in both a and b. The tensor product A ⊗ A again is a *-algebra for the product defined by (a ⊗ b)(a′ ⊗ b′) = aa′ ⊗ bb′ and the involution given by (a ⊗ b)* = a* ⊗ b*. Similarly, we consider the triple tensor product A ⊗ A ⊗ A.

If A has an identity, i.e., an element 1 (necessarily unique) satisfying a1 = 1a = a for all a ∈ A, then obviously A ⊗ A has an identity, namely 1 ⊗ 1.

I am now ready to say what is meant by a comultiplication on a *-algebra with identity.

1.

Definition. Let A be a *-algebra with identity. Consider a *-homomorphism Δ: A → A ⊗ A that is unital [i.e., such that Δ(1) = 1 ⊗ 1]. Assume that (Δ ⊗ ι)Δ = (ι ⊗ Δ)Δ, where ι denotes the identity map. Then Δ is called a comultiplication (or coproduct) on A.

Saying that Δ is a *-homomorphism means that it is a linear map such that Δ(ab) = Δ(a)Δ(b) for all a, b ∈ A and that Δ(a*) = Δ(a)*. The map Δ ⊗ ι is a map from A ⊗ A to A ⊗ A ⊗ A, defined by (Δ ⊗ ι)(a ⊗ b) = Δ(a) ⊗ b whenever a, b ∈ A; similarly for ι ⊗ Δ. So the maps (Δ ⊗ ι)Δ and (ι ⊗ Δ)Δ go from A to A ⊗ A ⊗ A, and they must be equal. This equality is called coassociativity.

Let me illustrate this with an example, which is important because it is in fact the motivation for the above definition.

2.

Example. Let G be a finite group. Define A to be the vector space C(G) of complex functions on G. Then A becomes a *-algebra when the product is defined by (fg)(p) = f(p)g(p) whenever f, g ∈ A and p ∈ G and the involution by letting f*(p) be the complex conjugate of f(p). This algebra has an identity, namely the function with value 1 on all elements. The tensor product A ⊗ A is naturally identified with C(G × G), the space of complex functions on G × G, by (f ⊗ g)(p, q) = f(p)g(q) when f, g ∈ C(G) and p, q ∈ G. Similarly, A ⊗ A ⊗ A is identified with C(G × G × G), the space of complex functions in three variables on G.

Define Δ: A → A ⊗ A by Δ(f)(p, q) = f(pq) when p, q ε G and f ε A. It is fairly easy to verify that Δ is a *-homomorphism, and that it is unital. It is also coassociative as a consequence of the associativity of the group multiplication. Indeed, when f is in A, and p, q, r are elements in G, we have ((Δ ⊗ ι)Δ(f))(p, q, r) = Δ(f)(pq, r) = f((pq)r). Similarly, ((ι ⊗ Δ)Δ(f))(p, q, r) = f(p(qr)), so we get coassociativity.
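This coassociativity computation can be replayed mechanically. The sketch below (our own illustration; the names `mult`, `Delta`, `lhs`, `rhs` are ours, not from the paper) takes G = S3 and checks that (Δ ⊗ ι)Δ(f) = (ι ⊗ Δ)Δ(f) for an arbitrary function f:

```python
# Toy check of Example 2: Δ(f)(p, q) = f(pq) is coassociative because the
# group multiplication is associative. Here G = S3, the permutations of {0,1,2}.
from itertools import permutations

G = list(permutations(range(3)))
def mult(p, q):
    return tuple(p[q[i]] for i in range(3))   # composition (p∘q)(i) = p(q(i))

f = {p: complex(i, -i) for i, p in enumerate(G)}          # arbitrary f ∈ C(G)

Delta = {(p, q): f[mult(p, q)] for p in G for q in G}     # Δ(f)(p, q) = f(pq)
lhs = {(p, q, r): Delta[(mult(p, q), r)] for p in G for q in G for r in G}  # f((pq)r)
rhs = {(p, q, r): Delta[(p, mult(q, r))] for p in G for q in G for r in G}  # f(p(qr))
assert lhs == rhs   # coassociativity
```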

We see from this example that the notion of a coproduct is really dual to that of a product, and the same is true for coassociativity and associativity.

Now I will pass to the nonunital case. I will begin by explaining, using the above example, the problems that we run into, and I will show how to overcome them.

Suppose that the group G is no longer finite. We can still define the algebra C(G) of all complex functions on G. The function Δ(f), defined by Δ(f)(p, q) = f(pq), is still in C(G × G), but unfortunately now C(G × G) will be much bigger than C(G) ⊗ C(G). To see the consequences of this nonfiniteness, let us look again at the example of the ax + b-group as above. Denote by f the function on G obtained by taking the upper left matrix element and by g the function given by the upper right matrix element. Then a simple calculation shows that Δ(f) = f ⊗ f and Δ(g) = f ⊗ g + g ⊗ 1 (one can recognize the matrix multiplication of two elements in the ax + b-group here). So in this particular case, we get functions h on the group such that Δ(h) ∈ C(G) ⊗ C(G). But as explained in the introduction, these functions belong to the wrong function algebra. If we want such functions h also to be integrable, we will be left only with the possibility h = 0.
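The formulas Δ(f) = f ⊗ f and Δ(g) = f ⊗ g + g ⊗ 1 can be verified numerically. In this sketch (our own; the helper `mult` is ours) an element of the ax + b-group is stored as the pair (a, b) of its matrix entries:

```python
# Check Δ(f)(p, q) = f(p)f(q) and Δ(g)(p, q) = f(p)g(q) + g(p) on random elements.
import random

def mult(p, q):
    (a, b), (a2, b2) = p, q
    return (a * a2, a * b2 + b)   # product of [[a,b],[0,1]] and [[a2,b2],[0,1]]

f = lambda p: p[0]                # upper left matrix element
g = lambda p: p[1]                # upper right matrix element

for _ in range(100):
    p = (random.uniform(0.1, 5), random.uniform(-5, 5))
    q = (random.uniform(0.1, 5), random.uniform(-5, 5))
    assert abs(f(mult(p, q)) - f(p) * f(q)) < 1e-9            # Δ(f) = f ⊗ f
    assert abs(g(mult(p, q)) - (f(p) * g(q) + g(p))) < 1e-9   # Δ(g) = f ⊗ g + g ⊗ 1
```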

A better choice here is to take for A the algebra K(G) of complex functions with finite support on G (i.e., functions that vanish outside a finite set, depending on the function). Then we can identify A ⊗ A with K(G × G), the functions with finite support in two variables. Moreover, C(G × G) can be characterized as the multiplier algebra M(A ⊗ A) of A ⊗ A. I will introduce this notion below. Just observe for the moment that the product of any function with a function of finite support will again have finite support. Then we can characterize the *-algebra C(G × G) solely in terms of K(G) itself, and it is this property that allows us to define a comultiplication on an algebra without identity, modeled on this example.

First we have to introduce the multiplier algebra M(A) of an algebra A without identity. We will need to work with algebras A with a nondegenerate product. This means that, given a ∈ A, we must have a = 0 if ab = 0 for all b ∈ A or if ba = 0 for all b ∈ A. This property is automatic when A has an identity. We need it to avoid completely trivial products (such as ab = 0 for all a, b ∈ A). In what follows, I will always assume our algebras to have nondegenerate products.

3.

Definition. Let A be an algebra. By a multiplier of A I mean a pair (ρ1, ρ2) of linear maps of A to itself, satisfying aρ1(b) = ρ2(a)b for all a, b ∈ A.

If x = (ρ1, ρ2) is such a multiplier, we will have aρ1(bc) = ρ2(a)bc = aρ1(b)c and because this holds for all a and the product is nondegenerate, we get ρ1(bc) = ρ1(b)c. Similarly, ρ2(cb) = cρ2(b). Therefore, we may think of ρ1 as multiplying from the left by x and of ρ2 as multiplying from the right by x. Indeed, if we denote xa = ρ1(a) and ax = ρ2(a), then the defining formula becomes a form of associativity, namely a(xb) = (ax)b. I will work further with these notations.

The set M(A) is obviously a vector space if we define the linear structure in the natural way. It is also a *-algebra when we define the product xy of two elements x, y ε M(A) by (xy)a = x(ya) and a(xy) = (ax)y and the involution by x*a = (a*x)* and ax* = (xa*)*. This algebra has an identity (the pair of identity maps), and it contains A as a two-sided ideal by associating to a ε A the multiplier of A obtained by left and right multiplication with a. In the case of the example A = K(G) of complex functions with finite support on G, one has that M(A) is naturally identified with C(G), the *-algebra of all complex functions on G.
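The identification M(K(G)) ≅ C(G) can be made very concrete. The sketch below (our own illustration, for G = ℤ; all names are ours) shows a function of unbounded support acting as a multiplier on finite-support functions, and checks the defining identity a(xb) = (ax)b pointwise:

```python
# K(ℤ) elements are stored as dicts {point: value} with finite support;
# an arbitrary function x on ℤ multiplies them back into K(ℤ).
a = {0: 1, 3: 2, -7: 5}                   # an element of K(ℤ)
x = lambda p: p * p + 1                   # an element of C(ℤ), unbounded support

xa = {p: x(p) * v for p, v in a.items()}  # left multiplication ρ1(a) = "xa"
ax = {p: v * x(p) for p, v in a.items()}  # right multiplication ρ2(a) = "ax"
assert set(xa) == set(a)                  # the support stays finite

# the defining multiplier identity a(xb) = (ax)b, checked pointwise:
b = {3: 4, 5: 1}
for p in set(a) | set(b):
    assert a.get(p, 0) * (x(p) * b.get(p, 0)) == (a.get(p, 0) * x(p)) * b.get(p, 0)
```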

In order to define a comultiplication here, we need to consider M(A ⊗ A). This is possible because A ⊗ A is an algebra with a nondegenerate product. A coproduct will be a *-homomorphism Δ : A → M(A ⊗ A), but we need two extra properties. The first one is to replace Δ(1) = 1 ⊗ 1 in the nonunital case, and the second one is coassociativity (Δ ⊗ ι)Δ = (ι ⊗ Δ)Δ. The problem with this last equation is that Δ ⊗ ι and ι ⊗ Δ are defined only on A ⊗ A, while Δ has range in M(A ⊗ A).

This hurdle is overcome using the following result.

4.

Proposition. Let A and B be *-algebras and γ: A → M(B) a *-homomorphism, such that γ(A)B = B. Then γ has a unique extension to a *-homomorphism γ̃ from M(A) to M(B).

By γ(A)B = B, I mean that any element in B is a linear combination of elements of the form γ(a)b with a ∈ A and b ∈ B. A *-homomorphism with this property is called nondegenerate. If A and B are *-algebras with identity, nondegeneracy is the same as unitality (γ(1) = 1). If such an extension γ̃ exists, it must satisfy γ̃(x)(γ(a)b) = γ(xa)b when x ∈ M(A), a ∈ A, and b ∈ B. It follows that such an extension is unique when γ is nondegenerate. The existence of the extension is proven essentially by showing that the above formula can be used to define γ̃. In what follows, I will use γ also to denote the extension.

So we see that the solutions to the two problems we had are related. They yield the following.

5.

Definition. Let A be a *-algebra and Δ : A → M(A ⊗ A) a *-homomorphism. Then Δ is called a comultiplication if Δ is nondegenerate and satisfies (Δ ⊗ ι)Δ = (ι ⊗ Δ)Δ.

Now coassociativity is meaningful, as Δ ⊗ ι and ι ⊗ Δ are nondegenerate homomorphisms from A ⊗ A to M(A ⊗ A ⊗ A) (because Δ is nondegenerate) and therefore have unique extensions to homomorphisms from M(A ⊗ A) to M(A ⊗ A ⊗ A).

Needless to say, the map Δ defined on K(G), the algebra of complex functions with finite support on G, given by Δ(f)(p, q) = f(pq), will be a comultiplication on K(G) in the sense of definition 5.

Having a good notion of a comultiplication on a *-algebra now, we are ready to quantize in some sense the structure of a discrete group. We have to find the analogue of the identity and the inverse in the group. This can be done in different ways. The option I will take here is perhaps not the most obvious one, but it turns out to be the simplest one to explain.

This quantization procedure starts from the following two observations. If G is a group, the maps (p, q) ↦ (pq, q) and (p, q) ↦ (p, pq) are one to one and onto (bijective) from G × G to itself. This is easily seen as, e.g., the map (p, q) ↦ (pq−1, q) is the inverse of the first one. But there is also some converse to this property. Namely, if G is a set with an associative product such that these two maps are bijective, then G is actually a group.
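Both observations in this paragraph are easy to test by brute force. The toy sketch below (our own; the helper `images` is ours) checks that the two maps are bijective for the group ℤ/5 and that injectivity fails for an associative product that is not a group law:

```python
# For a group, (p, q) ↦ (pq, q) and (p, q) ↦ (p, pq) are bijections of G × G.
def images(S, op):
    T1 = {(p, q): (op(p, q), q) for p in S for q in S}
    T2 = {(p, q): (p, op(p, q)) for p in S for q in S}
    return T1, T2

# G = ℤ/5 under addition: both maps are bijective (25 distinct images).
G = range(5)
T1, T2 = images(G, lambda p, q: (p + q) % 5)
assert len(set(T1.values())) == 25 and len(set(T2.values())) == 25

# The monoid ({0, 1}, ·) is associative but not a group: T1 is not injective,
# since (0, 0) and (1, 0) both map to (0, 0).
M = [0, 1]
T1m, _ = images(M, lambda p, q: p * q)
assert len(set(T1m.values())) < 4
```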

This is one observation. The other is that when f, g ε K(G), then also the functions Δ(f)(1 ⊗ g) and (f ⊗ 1)Δ(g) are in K(G × G). Indeed, we have (Δ(f)(1 ⊗ g))(p, q) = f(pq)g(q) and g will force q to lie in a finite set and f will force pq to lie in a finite set. By the group property, p will also have to lie in a finite set in order to give a nonzero outcome.

This leads to the following definition (see ref. 8).

6.

Definition. Let (A, Δ) be a *-algebra A with a comultiplication Δ. We call (A, Δ) a multiplier Hopf *-algebra if Δ(a)(1 ⊗ b) and (a ⊗ 1)Δ(b) are in A ⊗ A for all a, b ε A, and if the linear maps T1, T2 defined from A ⊗ A to itself by T1(a ⊗ b) = Δ(a)(1 ⊗ b) and T2(a ⊗ b) = (a ⊗ 1)Δ(b) are bijective.

Remark that in general Δ(A) ⊆ M(A ⊗ A). Also 1 ⊗ a ∈ M(A ⊗ A) when a ∈ A in the obvious way. And so Δ(a)(1 ⊗ b) ∈ M(A ⊗ A) for all a, b ∈ A. The first requirement now is that we actually get an element in A ⊗ A, and similarly for (a ⊗ 1)Δ(b).

I have explained already why this condition is fulfilled in the motivating example K(G). In this example, the maps T1 and T2 are given on K(G × G) by the formulas (T1f)(p, q) = f(pq, q) and (T2f)(p, q) = f(p, pq), so these maps have inverses, given by (T1−1f)(p, q) = f(pq−1, q) and (T2−1f)(p, q) = f(p, p−1q), where f ε K(G × G) and p, q ε G.
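These formulas for T1, T2 and their inverses can be checked directly. In the sketch below (our own; names ours) we take the additive group G = ℤ, so pq reads p + q and pq−1 reads p − q, and represent K(ℤ × ℤ) by finite-support dicts:

```python
# T1 f(p, q) = f(p + q, q) and T2 f(p, q) = f(p, p + q) on K(ℤ × ℤ),
# together with the inverse formulas from the text.
f = {(0, 1): 2.0, (4, -3): 1.5, (2, 2): -1.0}   # finite support in two variables

T1     = lambda f: {(p - q, q): v for (p, q), v in f.items()}  # (T1 f)(p,q) = f(p+q, q)
T1_inv = lambda f: {(p + q, q): v for (p, q), v in f.items()}  # (T1⁻¹f)(p,q) = f(p−q, q)
T2     = lambda f: {(p, q - p): v for (p, q), v in f.items()}  # (T2 f)(p,q) = f(p, p+q)
T2_inv = lambda f: {(p, q + p): v for (p, q), v in f.items()}  # (T2⁻¹f)(p,q) = f(p, −p+q)

assert T1_inv(T1(f)) == f and T1(T1_inv(f)) == f
assert T2_inv(T2(f)) == f and T2(T2_inv(f)) == f
```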

Inspired by these formulas in the group case, we can define for a general multiplier Hopf *-algebra a linear map S: A → A, called the antipode, such that T1−1(a ⊗ b) = ((ι ⊗ S)Δ(a))(1 ⊗ b) and T2−1(a ⊗ b) = (a ⊗ 1)((S ⊗ ι)Δ(b)) for all a and b. These formulas have to be interpreted in the correct way. The map S is a bijective antihomomorphism (i.e., it reverses the product) and satisfies S(a*) = S−1(a)*. Remark that in general S is not a *-map, as one may have that S2 ≠ ι. In some sense, the antipode is the generalization of the inverse in the group. Similarly, the identity in the group can be replaced in this setting. It becomes a *-homomorphism ε: A → ℂ satisfying (ε ⊗ ι)Δ(a) = (ι ⊗ ε)Δ(a) = a. This map ε is called the counit.

In particular, when A is a multiplier Hopf *-algebra with identity, then it is a Hopf *-algebra. On the other hand, any Hopf *-algebra is a multiplier Hopf ∗-algebra, and so we have a natural generalization of the notion of a Hopf *-algebra to the case of an algebra without identity. For more details about the notions and results in this section, see refs. 7 and 8.

To finish this section, I would like to remark that most of what we do here is still true in the non *-case and for algebras over other fields. This means that interesting examples (like the so-called root of unity examples) still fit into this theory. I refer to the literature for the study of such examples (e.g., ref. 7). In this note, however, I have chosen to consider only the multiplier Hopf *-algebras with positive integrals, because those will give rise to locally compact quantum groups as developed in refs. 4, 9, and 10.

Multiplier Hopf *-Algebras with Positive Integrals

In this section, I will discuss the notion of an integral on a multiplier Hopf *-algebra which is like the Haar measure on a locally compact group. In the finite-dimensional case, such an integral will always exist. This integral will automatically be positive if the underlying *-algebra structure is good (in the sense that it is an operator algebra), so it makes sense to call such a finite-dimensional multiplier Hopf *-algebra a finite quantum group. However, in general, an integral does not always exist. Therefore, it does not seem appropriate to consider multiplier Hopf *-algebras in general as quantized groups. Instead, as we will see later in this section, it makes much more sense to use this name only in the presence of integrals. A similar point of view is taken for locally compact quantum groups in ref. 4.

Let me begin by formulating the correct definition of an integral and comment on it afterwards. Throughout this section, fix a multiplier Hopf *-algebra (A, Δ).

First observe the following. If ω is a linear functional on A, we can define for all a ε A an element x in M(A) by

xb = (ι ⊗ ω)(Δ(a)(b ⊗ 1))
bx = (ι ⊗ ω)((b ⊗ 1)Δ(a))

where b ∈ A. This is well defined, because both Δ(a)(b ⊗ 1) and (b ⊗ 1)Δ(a) are in A ⊗ A, and we can apply ι ⊗ ω mapping A ⊗ A to A ⊗ ℂ, which is naturally identified with A itself. We denote this element x by (ι ⊗ ω)Δ(a). Similarly, we can define (ω ⊗ ι)Δ(a).

Then the following definition makes sense.

7.

Definition. A linear functional ϕ on A is called left invariant if (ι ⊗ ϕ)Δ(a) = ϕ(a)1 for all a ε A. A left integral on A is a nonzero left invariant functional on A. Similarly, a nonzero linear functional ψ satisfying (ψ ⊗ ι)Δ(a) = ψ(a)1 for all a ε A is called a right integral.

Let me first of all illustrate this definition by looking at the example associated with a group. If we let ϕ(f) = Σq∈G f(q), we get a nonzero linear functional (the sum is well defined because only finitely many terms are nonzero). Because Δ(f)(p, q) = f(pq), we see that ((ι ⊗ ϕ)Δ(f))(p) = Σq∈G f(pq). Now, for a group, the map q ↦ pq is bijective for all p ∈ G. Hence Σq∈G f(pq) = Σq∈G f(q) for all p ∈ G. This means precisely that (ι ⊗ ϕ)Δ(f) = ϕ(f)1. Hence, ϕ is a left integral on K(G).

In this example, the left integral is also right invariant (because qqp is also bijective). This, however, is no longer true in general.
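The invariance computation for ϕ(f) = Σ f(q) can be replayed numerically. A toy check (our own; names ours) for G = ℤ/6:

```python
# Left invariance of the counting functional on K(ℤ/6): Σ_q f(pq) = Σ_q f(q)
# for every p, because q ↦ pq (here q ↦ (p + q) mod 6) is a bijection of G.
G = range(6)
f = {q: complex(q, 3 - q) for q in G}            # an arbitrary function on G

phi = sum(f[q] for q in G)                       # ϕ(f)
for p in G:
    # ((ι ⊗ ϕ)Δ(f))(p) = Σ_q f(pq)
    assert sum(f[(p + q) % 6] for q in G) == phi
```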

We now collect the various properties that can be proven about integrals on multiplier Hopf *-algebras (see, e.g., ref. 6).

8.

Theorem. Let (A, Δ) be a multiplier Hopf *-algebra with a left integral ϕ. Any other left integral is a scalar multiple of ϕ. There is also a right integral, unique up to a scalar. There is a multiplier δ in M(A) such that (ϕ ⊗ ι)Δ(a) = ϕ(a)δ for all a. The integral is faithful in the sense that when a ε A, then a = 0 if ϕ(ab) = 0 for all b or ϕ(ba) = 0 for all b. Similarly the right integral is faithful. Finally, there is an automorphism σ of A such that ϕ(ab) = ϕ(bσ(a)) for all a, b ε A.

There are more results (and more data) about left and right integrals. I will say a little more later. But first, let me comment on the properties given in the theorem. There is a lot to say about these already.

The first statement is that integrals are unique if they exist. Existence of a left integral implies existence of a right one and vice versa, and uniqueness is of course up to a scalar.

We also have faithfulness. When ϕ is positive, i.e., ϕ(a*a) ≥ 0 whenever a ε A, then it will follow from the Cauchy–Schwarz inequality that ϕ is faithful (in the above sense) if, and only if, ϕ(a*a) = 0 implies a = 0. That ϕ is faithful means in a way that it is far from being trivial, and that it takes care of all of A.

The faithfulness of ϕ is related to the injectivity of the map T2 as defined in the previous section by T2(a ⊗ b) = (a ⊗ 1)Δ(b). Indeed, suppose that Σ(ai ⊗ 1)Δ(bi) = 0. Multiply from the right with Δ(x) and apply ι ⊗ ϕ. This gives Σ aiϕ(bix) = 0 for all x ∈ A. It follows from the faithfulness of ϕ that Σ ai ⊗ bi = 0. Similarly, the injectivity of T1 is a consequence of the existence of a faithful right integral.

Now we come to the existence of the multiplier δ. This is an object that is trivial in the group example, because there the left integral is also right invariant. It is, however, similar to the modular function of a nonunimodular locally compact group relating the left and the right Haar measures. Still, the possibility that δ is nontrivial is due to the nonabelianness of the underlying algebra. Indeed, we may already get nontrivial δ for the so-called discrete quantum groups, whereas of course discrete (locally compact) groups are always unimodular.

One can show that δ is an invertible element in M(A) and that (ι ⊗ ψ)Δ(a) = ψ(a)δ−1 for the right integral ψ. We also have Δ(δ) = δ ⊗ δ, which is the analogue of the property for a locally compact group, saying that the modular function is a group homomorphism.

Finally, consider the automorphism σ and first observe that it need not be a *-automorphism. We also have an automorphism σ′ satisfying ψ(ab) = ψ(bσ′(a)) for all a, b ε A. Again, these two automorphisms are trivial in the group example. Moreover, now there is no analogue in the locally compact group case. The nontriviality of σ can occur only when A is not abelian. Indeed, if A is abelian, ϕ(ab) = ϕ(bσ(a)) = ϕ(σ(a)b), and the faithfulness of ϕ would give σ(a) = a. More generally, if ϕ is a trace (that is if ϕ(ab) = ϕ(ba) for all a, b), then again σ is trivial.

The automorphism σ is very useful, because in many ways it helps to deal with problems that arise from the fact that the underlying algebra is not abelian. The formula ϕ(ab) = ϕ(bσ(a)) is familiar to operator algebraists and mathematical physicists. It should be recognized as an algebraic form of the Kubo–Martin–Schwinger condition. In fact, it is tightly related to this property, but it would take us too long to go into this.

There is one more object in the theory. If S is the antipode (recall that the antipode is the analogue of the inverse in the group), then S2 is an automorphism of A leaving Δ invariant (in the sense that Δ(S2(a)) = (S2 ⊗ S2)Δ(a) for all a ∈ A). It follows easily that ϕ○S2 is again a left integral, and by uniqueness, there is a scalar τ ∈ ℂ such that ϕ(S2(a)) = τϕ(a) for all a ∈ A. Again, in the group case, τ = 1 because there S2 = ι, but in general it may be that τ ≠ 1 (see also a further remark). This number has modulus one in the case of a positive invariant integral on a multiplier Hopf *-algebra.

There are many relations among these various data. One has, e.g., that Δ(σ(a)) = (S2 ⊗ σ)Δ(a) for all a. Another property is that σ(δ) = τ−1δ. The automorphisms σ and σ′ are related by the formula σ′(a) = δσ(a)δ−1.

Now I will formulate the duality for multiplier Hopf *-algebras with integrals. We first define the dual object.

9.

Definition. Let (A, Δ) be a multiplier Hopf *-algebra with a left integral ϕ. Denote by  the space of linear functionals on A of the form x ↦ ϕ(xa), where a ε A.

It is clear that Â is a subspace of the vector space of all linear functionals on A. By the faithfulness of ϕ, it separates points of A (that is, if x, y ∈ A and x ≠ y, there is an ω ∈ Â such that ω(x) ≠ ω(y)). And, of course, it is independent of the choice of ϕ. In fact, it follows from the existence of the automorphism σ that Â is also the space of functionals of the form x ↦ ϕ(ax), where a runs through A. Finally, one can show that using a right integral instead of a left one gives the same space of functionals.

Now, we have the following duality theorem.

10.

Theorem. Let (A, Δ) be a multiplier Hopf *-algebra with integrals. Then  can be made into a *-algebra, and there exists a coproduct Δ̂ on Â, making (Â, Δ̂) into a multiplier Hopf *-algebra with integrals. The dual of (Â, Δ̂) is canonically isomorphic with (A, Δ).

I will now say more about the construction of this dual. First, the space  is made into an algebra by defining the product ω1ω2 of elements in  by (ω1ω2)(x) = (ω1 ⊗ ω2)Δ(x). Writing, e.g., ω2 = ϕ(⋅ a), we see that (ω1 ⊗ ω2)Δ(x) = (ω1 ⊗ ϕ)(Δ(x)(1 ⊗ a)), and this is well defined as Δ(x)(1 ⊗ a) ε AA. Then one must show that ω1ω2 is again in Â. The associativity of the product is a consequence of the coassociativity of the coproduct. The product can be shown to be nondegenerate.
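For A = K(G), this dual product turns out to be convolution of the densities: ϕ(⋅ a)ϕ(⋅ b) = ϕ(⋅ a ⋆ b), so the dual is the group algebra. The sketch below (our own illustration for G = ℤ/6; names ours) checks this identity on a test function:

```python
# (ω_a ω_b)(x) = Σ_{p,q} x(pq) a(p) b(q) equals ω_{a⋆b}(x) with the
# convolution (a ⋆ b)(r) = Σ_p a(p) b(p⁻¹r); here G = ℤ/6 additively.
n = 6
G = range(n)
a = {q: q + 1 for q in G}
b = {q: (q * q) % 5 for q in G}
x = {q: 2 * q - 3 for q in G}                                   # test function

lhs = sum(x[(p + q) % n] * a[p] * b[q] for p in G for q in G)   # (ω_a ω_b)(x)
conv = {r: sum(a[p] * b[(r - p) % n] for p in G) for r in G}    # (a ⋆ b)(r)
rhs = sum(x[r] * conv[r] for r in G)                            # ω_{a⋆b}(x)
assert lhs == rhs
```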

Next, one has to define the involution in Â. This is done by putting ω*(x) equal to the complex conjugate of ω(S(x)*), where S is the antipode. Again, one must show that ω* ∈ Â when ω ∈ Â. Then Â is a *-algebra.

The definition of the coproduct is more subtle. In fact, given ω1, ω2 ε Â, one can define the elements Δ̂(ω1)(1 ⊗ ω2) and (ω1 ⊗ 1)Δ̂(ω2) by

(Δ̂(ω1)(1 ⊗ ω2))(a ⊗ b) = (ω1 ⊗ ω2)(Δ(a)(1 ⊗ b))
((ω1 ⊗ 1)Δ̂(ω2))(a ⊗ b) = (ω1 ⊗ ω2)((a ⊗ 1)Δ(b))

for a, b ε A. One can show that it is really possible to define Δ̂ this way, and that it has the right properties. This makes  into a multiplier Hopf *-algebra.

Finally, we have to define the integral on Â. It is shown that a right integral ψ̂ can be defined by letting ψ̂(ω) = ε(a) when ω = ϕ(⋅ a) where, as before, ϕ is the left integral on A, and ε is the counit (as introduced at the end of the previous section).

One can show that ψ̂(ω*ω) = ϕ(a*a) when ω = ϕ(⋅ a) as before. In some sense, the functional ϕ(⋅ a) is the Fourier transform of the element a, and we get the Plancherel formula here, which implies that ψ̂ is positive when ϕ is positive. We have to mention here that, when there is a positive left integral, there is also a positive right integral. When S2 = ι, this is easily seen, as ϕ○S will be such a right integral and positivity follows from ϕ(S(a*a)) = ϕ(S(a)S(a)*); but we need S(a*) = S(a)*, which is true when S2 = ι. If S2 ≠ ι, the argument gets much more involved. One has to use operator algebra techniques to prove the existence of such a positive right integral. For more details about this section, we refer to ref. 6; see also refs. 5 and 7.
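The Plancherel formula can be checked by hand in the group example. The sketch below (our own, for G = ℤ/6 written additively, so e = 0 and ψ̂(ϕ(⋅ c)) = ε(c) = c(0)) uses the fact that for K(G) the dual product is convolution of densities and that the density of ω* is b(r) = the conjugate of a(r−1):

```python
# Toy verification of ψ̂(ω*ω) = ϕ(a*a) for ω = ϕ(· a) on K(ℤ/6).
n = 6
G = range(n)
a = {q: complex(q, 1 - q) for q in G}

# density of ω*: from ω*(x) = conjugate of ω(S(x)*) one finds b(r) = conj(a(−r))
b = {r: a[(-r) % n].conjugate() for r in G}

# ω*ω = ϕ(· b ⋆ a), so ψ̂(ω*ω) = (b ⋆ a)(0) = Σ_p b(p) a(−p)
conv_at_e = sum(b[p] * a[(-p) % n] for p in G)

phi_a_star_a = sum(abs(a[q]) ** 2 for q in G)    # ϕ(a*a) = Σ |a(q)|²
assert abs(conv_at_e - phi_a_star_a) < 1e-12
```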

An Example

In this section, we will develop an example. We will use this to illustrate some aspects of the theory, both as we described it in the previous sections and as it is found in other approaches in the literature.

We start with the enveloping algebra 𝒰 of the Lie algebra of SU(2). This Lie algebra is familiar to both mathematicians and physicists and therefore is a good starting point.

This algebra 𝒰 is an algebra with identity and is generated by elements e, f, and h, satisfying he − eh = 2e, hf − fh = −2f and ef − fe = h. The involutive structure we consider is given by e* = f and h* = h. We have the following result.

11.

Proposition. There is a comultiplication Δ on 𝒰 given by Δ(x) = x ⊗ 1 + 1 ⊗ x when x is e, f, or h. This makes (𝒰, Δ) into a Hopf *-algebra.

The proof of this result is not so difficult. First, one must show that Δ is well defined when given on the generators, as above. To do this, one just has to show that the candidates for Δ(e), Δ(f), and Δ(h) satisfy the same commutation rules in 𝒰 ⊗ 𝒰 as e, f, and h, respectively, in 𝒰. To prove that we do have a Hopf algebra, one shows that the antipode and counit are defined by ε(x) = 0 and S(x) = −x when x is e, f, or h.

In fact, for any Lie algebra, the enveloping algebra is a Hopf algebra when Δ is defined on the elements x of the Lie algebra by Δ(x) = x ⊗ 1 + 1 ⊗ x.

Notice that in this special case, the algebra is nontrivial, whereas the comultiplication is trivial in the sense that it contains no essential information. This is what we will change next. We will deform the comultiplication. To be able to do this in our setting, we also need to change the appearance of the algebra a little. We will take a real number t ≠ 0 and consider formally q = exp(½th). We put λ = exp(t). This takes us to the algebra with identity B generated by elements e, f, and q, with q invertible, satisfying qe = λeq, qf = λ−1fq and ef − fe = (λ − λ−1)−1(q2 − q−2). We consider the *-structure given by e* = f and q* = q. This algebra is, in some sense, a deformation of the previous one. Again we get a Hopf *-algebra:

12.
Proposition. There is a comultiplication Δ on B defined by
Δ(e) = e ⊗ q−1 + q ⊗ e
Δ(f) = f ⊗ q−1 + q ⊗ f
Δ(q) = q ⊗ q
making (B, Δ) into a Hopf *-algebra.

The proof of this result goes in very much the same way as that of proposition 11. The calculations are a bit more involved. Now the counit is given by ε(e) = ε(f) = 0 and ε(q) = 1, whereas the antipode is given by S(e) = −λ−1e, S(f) = −λf and S(q) = q−1.

At this point, I am ready to give some more comments about the different approaches to quantum groups. What I describe in proposition 12 is the quantization of the Lie algebra of SU(2) in the Jimbo approach (3). If q is considered as a formal power series in the variable t with coefficients in the algebra generated by e, f, and h, we find ourselves in the Drinfel'd approach (2). Of course, these two approaches are closely related. The main contribution to the theory of quantum groups by both Drinfel'd and Jimbo was to show that a deformation like the one above in the case of the Lie algebra of SU(2) is possible for all the classical semisimple Lie algebras.

The Hopf *-algebra in proposition 12 does not admit positive integrals, unfortunately. Consider, e.g., the element q2. Then Δ(q2) = q2q2. If ϕ is a left invariant functional, we have ϕ(q2)1 = q2ϕ(q2). This implies ϕ(q2) = ϕ(q*q) = 0, which contradicts the faithfulness of the integral.

In order to produce an example with integrals from this, we have to, in some sense, take functions of the elements to make them integrable. This statement seems rather vague, but it can be made precise. The idea is to represent the algebra B by operators on a Hilbert space and to use spectral theory to produce functions of certain of these operators.

In this example, this procedure goes as follows. The first thing to do is to construct the *-representations of B. Now it turns out that, just as for the Lie algebra of SU(2), we have irreducible *-representations πn labeled by n = 0, 1/2, 1, 3/2, …, acting on spaces Vn of dimension 2n + 1. If we denote a basis of Vn by {ξ(n, i) | i = −n/2, −n/2 + 1/2, … , n/2}, we get the following action:

πn(q)ξ(n, i) = λ^(2i) ξ(n, i)
πn(e)ξ(n, i) = r_i ξ(n, i + 1/2)
πn(f)ξ(n, i) = r_{i−1/2} ξ(n, i − 1/2)

where the ri are properly chosen real numbers. We have πn(e)ξ(n, i) = 0 if i = n/2 and πn(f)ξ(n, i) = 0 if i = −n/2.
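Assuming the ladder form πn(q)ξ(n, i) = λ^(2i)ξ(n, i), πn(e)ξ(n, i) = r_i ξ(n, i + 1/2), with f = e* and the r_i fixed (up to sign) by telescoping the commutation relation down from r_{n/2} = 0 — the normalization here is my reconstruction, chosen to be consistent with the surrounding statements — the representations can be generated and checked numerically; the function name is mine:

```python
import numpy as np

lam = 1.7  # lambda > 1, as before

def rep(twice_n):
    """Assumed matrices for pi_n on V_n (n = twice_n/2, dimension 2n+1):
    q diagonal with eigenvalues lam**(2i), e raises i by 1/2, f = e*.
    The r_i are obtained by telescoping
    ef - fe = (lam - lam^-1)^-1 (q^2 - q^-2) down from r_{n/2} = 0."""
    dim = twice_n + 1
    n = twice_n / 2.0
    i_vals = np.array([-n/2 + k/2 for k in range(dim)])  # i = -n/2, ..., n/2 in steps of 1/2
    q = np.diag(lam ** (2 * i_vals))
    g = (lam ** (4 * i_vals) - lam ** (-4 * i_vals)) / (lam - 1/lam)
    e = np.zeros((dim, dim))
    for k in range(dim - 1):
        e[k + 1, k] = np.sqrt(g[k + 1:].sum())           # r_{i_k}^2 = sum of g over i' > i_k
    return q, e, e.T

for tn in (0, 1, 2, 3):                                  # n = 0, 1/2, 1, 3/2
    q, e, f = rep(tn)
    qinv = np.linalg.inv(q)
    assert np.allclose(q @ e, lam * e @ q)
    assert np.allclose(q @ f, f @ q / lam)
    assert np.allclose(e @ f - f @ e, (q @ q - qinv @ qinv) / (lam - 1/lam))
    assert np.allclose(e[:, -1], 0)                      # e kills the top basis vector
    assert np.allclose(f[:, 0], 0)                       # f kills the bottom basis vector
print("all sampled representations satisfy the relations of B")
```

The two final assertions reproduce the statement that πn(e)ξ(n, i) = 0 for i = n/2 and πn(f)ξ(n, i) = 0 for i = −n/2.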

By taking functions, with finite support in the joint spectrum, of the self-adjoint and mutually commuting elements q and fe, we can single out each of these representations. That is the main property that makes the following result possible.

13.

Proposition. Denote by A the set of sequences (xn)n, where n runs over the numbers 0, 1/2, 1, 3/2, … as before, where xn is a (2n + 1) × (2n + 1) matrix over ℂ, and where only finitely many of the xn are nonzero. Then A is a *-algebra (without identity) when the product and involution are defined pointwise by the product and (usual) involution of matrices. There is a *-homomorphism γ of B into M(A) given by γ(a)x = (πn(a)xn)n. This *-homomorphism is nondegenerate.
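Concretely, A is an algebraic direct sum of matrix algebras. The sketch below (all names are mine) models an element of A as a finitely supported dictionary n → xn and checks the pointwise *-algebra structure, evaluating γ only on the generators; it reuses the assumed ladder-form representation matrices from before:

```python
import numpy as np

lam = 1.7

def rep(tn):
    """Assumed matrices for pi_n (same ladder form as before; n = tn/2)."""
    dim, n = tn + 1, tn / 2.0
    i = np.array([-n/2 + k/2 for k in range(dim)])
    g = (lam ** (4 * i) - lam ** (-4 * i)) / (lam - 1/lam)
    e = np.zeros((dim, dim))
    for k in range(dim - 1):
        e[k + 1, k] = np.sqrt(g[k + 1:].sum())
    return {'q': np.diag(lam ** (2 * i)), 'e': e, 'f': e.T}

# Elements of A: finitely supported sequences, stored as {twice_n: matrix}.
def mult(x, y):      # pointwise matrix product; absent components are zero
    return {tn: x[tn] @ y[tn] for tn in x if tn in y}

def star(x):         # pointwise adjoint
    return {tn: m.conj().T for tn, m in x.items()}

def gamma(gen, x):   # gamma on a generator of B: gamma(a)x = (pi_n(a) x_n)_n
    return {tn: rep(tn)[gen] @ m for tn, m in x.items()}

rng = np.random.default_rng(0)
x = {1: rng.standard_normal((2, 2)), 2: rng.standard_normal((3, 3))}  # n = 1/2, 1
y = {2: rng.standard_normal((3, 3)), 4: rng.standard_normal((5, 5))}  # n = 1, 2

xy = mult(x, y)      # supported only where both factors are: n = 1
assert list(xy) == [2]
# the involution is anti-multiplicative: (xy)* = y* x*
assert all(np.allclose(star(xy)[tn], mult(star(y), star(x))[tn]) for tn in xy)
# gamma respects qe = lam*eq componentwise
assert all(np.allclose(gamma('q', gamma('e', x))[tn],
                       lam * gamma('e', gamma('q', x))[tn]) for tn in x)
print("pointwise *-algebra structure and generator relations check out")
```

Nondegeneracy corresponds to the fact that γ(B)A spans all of A: functions of q and fe supported at a single point of the joint spectrum select any given component.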

The comultiplication Δ on B can also be pulled down to A in the right way:

14.

Proposition. There is a comultiplication Δ on A such that the extension of Δ to M(A) coincides on γ(B) with the original comultiplication given on B. The pair (A, Δ) is a multiplier Hopf *-algebra.

The proof of this fact is not at all trivial. One possibility is to use the property that Δ provides a rule to construct the tensor product of two representations of B. Such a tensor product can be written as a direct sum of irreducible representations. These decompositions must be the same for the algebra A, and this in turn will essentially determine Δ on A. Such a procedure is closely related to Tannaka-Krein duality. In fact, it is easier, although less direct, to use the duality of the Hopf *-algebra B with the quantum SU(2) function algebra and then use the representation theory of the quantum SU(2).

The same idea is behind the construction of the left and right integrals. Consider the one-dimensional representation π0. This representation satisfies π0(q) = 1 and π0(e) = π0(f) = 0, and so it coincides with the counit ε on B. Therefore, the identity in this component gives a self-adjoint idempotent h ∈ A with the property that γ(a)h = ε(a)h when a ∈ B. This means that h is a left integral on the dual Â. Because h is self-adjoint, we must also have hγ(a) = ε(a)h, so that h is also a right integral. The existence of such an element h is typical for discrete quantum groups. In this case, the left integral ϕ on A satisfies (ι ⊗ ϕ)Δ(h) = ϕ(h)1, and this formula can be used to construct ϕ. Similarly, (ψ ⊗ ι)Δ(h) = ψ(h)1 will allow construction of the right integral ψ. We find the following expressions.

15.

Proposition. The left integral ϕ and the right integral ψ on (A, Δ) are given by

ϕ(x) = Σn cn Σk λ^(−4k) ⟨xnξ(n, k), ξ(n, k)⟩
ψ(x) = Σn cn Σk λ^(4k) ⟨xnξ(n, k), ξ(n, k)⟩

where cn = (λ^(n+1) − λ^(−n−1))/(λ − λ⁻¹), where n = 0, 1/2, 1, … as before, and where k runs from −n/2 up to n/2 in steps of 1/2.

It is instructive to calculate the different data here and to verify some of the formulas relating them. The modular element δ satisfying ψ(x) = ϕ(xδ) for all x is given by δ = (πn(q⁴))n. A simple calculation shows that σ is determined by σ(q) = q, σ(e) = λ⁻²e and σ(f) = λ²f. Also, σ′(q) = q, σ′(e) = λ²e and σ′(f) = λ⁻²f. Then one can easily check the formula δσ(x) = σ′(x)δ. When x = e, we indeed have q⁴(λ⁻²e) = (λ²e)q⁴, as qe = λeq.
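With the assumed representation matrices, a natural reading of the integrals is the trace form ϕ(x) = Σn cn Tr(xnπn(q⁻²)) and ψ(x) = Σn cn Tr(xnπn(q²)); this form is my reconstruction, chosen to match the stated cn, δ = q⁴, σ, and σ′. Under that assumption, the modular identities can be verified numerically on a single component:

```python
import numpy as np

lam = 1.7

def rep(tn):
    """Assumed matrices for pi_n (same ladder form as before; n = tn/2)."""
    dim, n = tn + 1, tn / 2.0
    i = np.array([-n/2 + k/2 for k in range(dim)])
    g = (lam ** (4 * i) - lam ** (-4 * i)) / (lam - 1/lam)
    e = np.zeros((dim, dim))
    for k in range(dim - 1):
        e[k + 1, k] = np.sqrt(g[k + 1:].sum())
    return np.diag(lam ** (2 * i)), e, e.T

def c(tn):   # c_n = (lam^(n+1) - lam^(-n-1)) / (lam - lam^-1), with n = tn/2
    n = tn / 2.0
    return (lam ** (n + 1) - lam ** (-(n + 1))) / (lam - 1/lam)

def phi(x):  # assumed left integral: sum_n c_n Tr(x_n pi_n(q)^-2)
    return sum(c(tn) * np.trace(m @ np.linalg.inv(rep(tn)[0] @ rep(tn)[0]))
               for tn, m in x.items())

def psi(x):  # assumed right integral: sum_n c_n Tr(x_n pi_n(q)^2)
    return sum(c(tn) * np.trace(m @ rep(tn)[0] @ rep(tn)[0]) for tn, m in x.items())

rng = np.random.default_rng(1)
x = {2: rng.standard_normal((3, 3))}     # one component, n = 1
y = {2: rng.standard_normal((3, 3))}

q, e, f = rep(2)
qinv = np.linalg.inv(q)
delta = np.linalg.matrix_power(q, 4)     # modular element delta = q^4

assert np.isclose(psi(x), phi({2: x[2] @ delta}))           # psi(x) = phi(x delta)
sigma = lambda m: qinv @ qinv @ m @ q @ q                   # sigma acts as Ad(q^-2) here
assert np.isclose(phi({2: x[2] @ y[2]}),
                  phi({2: y[2] @ sigma(x[2])}))             # phi(xy) = phi(y sigma(x))
assert np.allclose(delta @ (lam**-2 * e), (lam**2 * e) @ delta)  # delta sigma(e) = sigma'(e) delta
assert phi({2: x[2].T @ x[2]}) > 0                          # positivity of phi
print("modular data consistent")
```

The last assertion illustrates why positivity works out here: each cn is positive and πn(q⁻²) is a positive matrix, so ϕ(x*x) ≥ 0.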

In this example, we have different left and right integrals; also, the modular element is nontrivial. The algebra is not abelian and the integrals are not traces, so that the automorphisms σ and σ′ are also nontrivial. Finally, observe that in this example S2 ≠ ι, but that we have τ = 1. At the moment, it is not yet clear whether multiplier Hopf *-algebras with positive integrals exist where τ is nontrivial.

Conclusion

Quantum groups have been studied quite intensively for the last 15 years, mainly in an algebraic context. The underlying mathematical structure is that of a Hopf algebra. If only Hopf algebras are used, however, the noncompact quantum groups appear in a form where no analogue of the Haar measure is available. This was explained with the help of an example. For this reason, algebras without identity have to be considered, and multiplier Hopf algebras must be used instead of Hopf algebras. The first nontrivial examples of such multiplier Hopf algebras were the duals of compact quantum groups (the discrete quantum groups); see, e.g., refs. 11–13. The next step in the theory was to obtain a class containing both the compact and the discrete quantum groups: the multiplier Hopf algebras with integrals. This class is self-dual in the sense of Pontryagin duality. In a sense, it is possible to do Fourier analysis in this framework.

Moreover, multiplier Hopf algebras with integrals have many nice features which, for Hopf algebras, hold only in the finite-dimensional case. As a consequence, many results known for finite-dimensional Hopf algebras can now be extended to the infinite-dimensional case by passing to multiplier Hopf algebras. One such result is the duality for actions (see ref. 14). Also, the quantum double construction of Drinfel'd turns out to be possible in this setting and yields new, highly nontrivial examples (see ref. 15).

It is expected that, more and more, multiplier Hopf algebras will be considered where before only Hopf algebras were used. One of the fields I am thinking of is the theory of special functions, where the existence of an invariant integral is crucial.

Finally, I refer again to the topological version of the theory described here. Multiplier Hopf *-algebras with positive integrals already exhibit all the algebraic properties needed to develop the theory of locally compact quantum groups, as explained in ref. 5. It is no surprise that this theory led to the development of locally compact quantum groups as given in ref. 4.

References

1. Klimyk A, Schmüdgen K. Quantum Groups and Their Representations. Berlin: Springer; 1997.
2. Drinfel'd V G. Proceedings ICM Berkeley. 1986, 798–820.
3. Jimbo M. Lett Math Phys. 1986;11:247–252.
4. Kustermans J, Vaes S. Proc Natl Acad Sci USA. 2000, 547–552.
5. Kustermans J, Van Daele A. Int J Math. 1997;8:1067–1139.
6. Van Daele A. Adv Math. 1998;140:323–366.
7. Van Daele A, Zhang Y. A Survey on Multiplier Hopf Algebras. 1999, to appear in the Proceedings of the Conference in Brussels on Hopf Algebras and Quantum Groups (1998).
8. Van Daele A. Trans Am Math Soc. 1994;342:917–932.
9. Kustermans J, Vaes S. C R Acad Sci Ser I. 1999;10:871–876.
10. Kustermans J, Vaes S. Ann Sci Éc Norm Sup. 2000, in press.
11. Effros E, Ruan Z-J. Int J Math. 1994;5:681–723.
12. Podles P, Woronowicz S L. Commun Math Phys. 1990;130:381–431.
13. Van Daele A. J Algebra. 1996;180:431–444.
14. Drabant B, Van Daele A, Zhang Y. Commun Alg. 1999;27:4117–4172.
15. Drabant B, Van Daele A. Algebras and Representation Theory. 1996, in press.
