
As I explain in Appendix A, guessing can be largely automatized. It was already mentioned in the Introduction that proving binomial (hypergeometric) identities can be done by the computer, thanks to the "WZ-machinery" [130, 190, 194, 195, 196] (see Footnote 5). Computing the degree bound is (in most cases) so easy that no computer is needed. (You may use it if you want.) It is only the determination of the multiplicative constant (item 5 above) by means of a special evaluation of the determinant or the evaluation of a special coefficient (in our example we determined the coefficient of µ^{\binom{n}{2}}) for which I am not able to offer a recipe so that things could be carried out on a computer.

The reader should notice that crucial for a successful application of the method is the existence of (at least) one parameter (in our example this is µ) to be able to apply the polynomiality arguments that are the "engine" of the method. If there is no parameter (such as in the determinant in Conjecture 49, or in the determinant (3.46) which would solve the problem of q-enumerating totally symmetric plane partitions), then we cannot even get started. (See the last paragraph of Section 2.1 for a few hints on how to introduce a parameter into your determinant, in case you are short of a parameter.)

On the other hand, a significant advantage of the "identification of factors method" is that not only is it capable of proving evaluations of the form

det(M) = CLOSED FORM,

(where CLOSED FORM means a product/quotient of "nice" factors, such as (2.19) or (2.17)), but also of proving evaluations of the form

det(M) = (CLOSED FORM) × (UGLY POLYNOMIAL), (2.21)

where, of course, M is a matrix containing (at least) one parameter, µ say. Examples of such determinant evaluations are (3.38), (3.39), (3.45) or (3.48). (The UGLY POLYNOMIAL in (3.38), (3.39) and (3.48) is the respective sum on the right-hand side, which in neither case can be simplified.)

How would one approach the proof of such an evaluation? For one part, we already know: "identification of factors" enables us to show that (CLOSED FORM) divides det(M) as a polynomial in µ. Then, comparison of degrees in µ on both sides of (2.21) yields that (UGLY POLYNOMIAL) is an (at this point unknown) polynomial in

16 C. KRATTENTHALER

µ of some maximal degree, m say. How can we determine this polynomial? Nothing "simpler" than that: we find m + 1 values e such that we are able to evaluate det(M) at µ = e. If we then set µ = e in (2.21) and solve for (UGLY POLYNOMIAL), then we obtain evaluations of (UGLY POLYNOMIAL) at m + 1 different values of µ. Clearly, this suffices to find (UGLY POLYNOMIAL), e.g., by Lagrange interpolation.
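In a computer algebra system the interpolation step is a one-liner. The following sympy sketch uses made-up data: a hypothetical degree bound m = 3 and four invented special evaluations (they do not come from any determinant in this article).

```python
from sympy import symbols, interpolate, expand

mu = symbols('mu')

# Hypothetical data: suppose the degree bound is m = 3, and suppose we
# managed to evaluate det(M)/(CLOSED FORM) at the m + 1 special values
# mu = 0, 1, 2, 3 (the values below are invented for illustration).
special_values = [(0, 1), (1, 4), (2, 15), (3, 40)]

# Lagrange interpolation recovers the unique polynomial of degree <= 3
# through these four points -- the (UGLY POLYNOMIAL).
ugly = expand(interpolate(special_values, mu))
print(ugly)  # mu**3 + mu**2 + mu + 1
```

If you are one evaluation short, fixing the leading coefficient (as suggested below) removes one unknown from the same linear problem.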

I put "simpler" in quotes, because it is here where the crux lies: we may not be able to find enough such special evaluations of det(M). In fact, you may object: 'Why all these complications? If we should be able to find m + 1 special values of µ for which we are able to evaluate det(M), then what prevents us from evaluating det(M) as a whole, for generic µ?' When I am talking of evaluating det(M) for µ = e, what I have in mind is that the evaluation of det(M) at µ = e is "nice" (i.e., gives a "closed form," with no "ugly" expression involved, such as in (2.21)), which is easier to identify (that is, to guess; see Appendix A) and in most cases easier to prove. By experience, such evaluations are rare. Therefore, the above described procedure will only work if the degree of (UGLY POLYNOMIAL) is not too large. (If you are just a bit short of evaluations, then finding other information about (UGLY POLYNOMIAL), like the leading coefficient, may help to overcome the problem.)

To demonstrate this procedure by going through a concrete example is beyond the scope of this article. We refer the reader to [28, 43, 50, 51, 89, 90] for places where this procedure was successfully used to solve difficult enumeration problems on rhombus tilings, and, respectively, to prove a conjectured constant term identity.

2.5. A differential/difference equation method. In this section I outline a method for the evaluation of determinants, often used by Vitaly Tarasov and Alexander Varchenko, which, like the preceding method, also requires (at least) one parameter.

Suppose we are given a matrix M = M(z), depending on the parameter z, of which we want to compute the determinant. Furthermore, suppose we know that M satisfies a differential equation of the form

\[
\frac{d}{dz}\,M(z) = T(z)\,M(z), \tag{2.22}
\]

where T(z) is some other, known matrix. Then, by elementary linear algebra, we obtain a differential equation for the determinant,

\[
\frac{d}{dz}\,\det M(z) = \mathrm{Tr}\big(T(z)\big)\cdot \det M(z), \tag{2.23}
\]

which is usually easy to solve. (In fact, the differential operator in (2.22) and (2.23) could be replaced by any operator. In particular, we could replace d/dz by the difference operator with respect to z, in which case (2.23) is usually easy to solve as well.)
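As a quick sanity check of (2.23), one can let a computer algebra system confirm the identity on a small example. The 2 × 2 matrix in the sympy sketch below is an arbitrary illustration, not one taken from the text.

```python
from sympy import Matrix, symbols, exp, simplify

z = symbols('z')

# An arbitrary illustrative matrix M(z) (invented for this sketch):
M = Matrix([[exp(z), z],
            [1,      exp(2*z)]])

# Any differentiable invertible M(z) satisfies (2.22) with
# T(z) = M'(z) * M(z)^{-1}; for the method one needs T(z) to be known
# in advance, but for checking (2.23) we may simply compute it.
T = M.diff(z) * M.inv()

lhs = M.det().diff(z)        # left-hand side of (2.23)
rhs = T.trace() * M.det()    # right-hand side of (2.23)
assert simplify(lhs - rhs) == 0
```

This is just Jacobi's formula for the derivative of a determinant, specialized to the situation of (2.22).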

Any method is best illustrated by an example. Let us try this method on the determinant (1.2). True, we have already evaluated this determinant twice (see Sections 2.2 and 2.3), but let us pretend that we have forgotten all this.

Of course, application of the method to (1.2) itself does not seem to be extremely promising, because that would involve the differentiation of binomial coefficients. So,

ADVANCED DETERMINANT CALCULUS 17

let us first take some factors out of the determinant (as we also did in Section 2.2),

\[
\det_{1\le i,j\le n}\left(\binom{a+b}{a-i+j}\right)
= \prod_{i=1}^{n} \frac{(a+b)!}{(a-i+n)!\,(b+i-1)!}
\times \det_{1\le i,j\le n}\Big( (a-i+n)(a-i+n-1)\cdots(a-i+j+1)
\cdot (b+i-j+1)(b+i-j+2)\cdots(b+i-1) \Big).
\]

Let us denote the matrix underlying the determinant on the right-hand side of this equation by M_n(a). In order to apply the above method, we need a matrix T_n(a) such that

\[
\frac{d}{da}\,M_n(a) = T_n(a)\,M_n(a). \tag{2.24}
\]

Similar to the procedure of Section 2.6, the best idea is to go to the computer, crank out T_n(a) for n = 1, 2, 3, 4, . . . , and, out of the data, make a guess for T_n(a). Indeed, it

suffices that I display T_5(a),

\[
T_5(a) = \begin{pmatrix}
\frac{1}{1+a+b}+\frac{1}{2+a+b}+\frac{1}{3+a+b}+\frac{1}{4+a+b} &
\frac{4}{4+a+b} &
-\frac{6}{3+a+b}+\frac{6}{4+a+b} &
\frac{4}{2+a+b}-\frac{8}{3+a+b}+\frac{4}{4+a+b} &
-\frac{1}{1+a+b}+\frac{3}{2+a+b}-\frac{3}{3+a+b}+\frac{1}{4+a+b} \\
0 &
\frac{1}{1+a+b}+\frac{1}{2+a+b}+\frac{1}{3+a+b} &
\frac{3}{3+a+b} &
-\frac{3}{2+a+b}+\frac{3}{3+a+b} &
\frac{1}{1+a+b}-\frac{2}{2+a+b}+\frac{1}{3+a+b} \\
0 & 0 &
\frac{1}{1+a+b}+\frac{1}{2+a+b} &
\frac{2}{2+a+b} &
-\frac{1}{1+a+b}+\frac{1}{2+a+b} \\
0 & 0 & 0 & \frac{1}{1+a+b} & \frac{1}{1+a+b} \\
0 & 0 & 0 & 0 & 0
\end{pmatrix},
\]

so that you are forced to conclude that, apparently, it must be true that

\[
T_n(a) = \left( \binom{n-i}{j-i} \sum_{k=0}^{n-i-1} \binom{j-i-1}{k} \frac{(-1)^k}{a+b+n-i-k} \right)_{1\le i,j\le n}.
\]

That (2.24) holds with this choice of T_n(a) is then easy to verify. Consequently, by means of (2.23), we have

\[
\frac{d}{da}\,\det M_n(a) = \left( \sum_{\ell=1}^{n-1} \frac{n-\ell}{a+b+\ell} \right) \det M_n(a),
\]

so that

\[
\det M_n(a) = \text{constant} \cdot \prod_{\ell=1}^{n-1} (a+b+\ell)^{\,n-\ell}. \tag{2.25}
\]

The constant is found to be (−1)^{\binom{n}{2}} \prod_{\ell=0}^{n-1} \ell!, e.g., by dividing both sides of (2.25) by a^{\binom{n}{2}}, letting a tend to infinity, and applying (2.2) to the remaining determinant.
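For small n, both the guess for T_n(a) and the consequence (2.25) can be checked on the computer. The sympy sketch below does this for n = 3. One convention of the sketch: the matrix is entered with the roles of the row and column indices interchanged relative to the display above, which leaves the determinant unchanged and is the orientation in which the product T_n(a) M_n(a) reproduces dM_n/da entrywise.

```python
from sympy import symbols, Matrix, Mul, binomial, cancel

a, b = symbols('a b')
n = 3  # size of the test case

# The matrix underlying the factored determinant, entered (as a
# convention of this sketch) with rows and columns interchanged;
# the determinant is the same.
def M_entry(i, j):  # i, j are 1-based
    first = Mul(*[a - j + t for t in range(i + 1, n + 1)])
    second = Mul(*[b + s for s in range(j - i + 1, j)])
    return first * second

M = Matrix(n, n, lambda i, j: M_entry(i + 1, j + 1))

# The guessed T_n(a) from the text:
def T_entry(i, j):  # i, j are 1-based
    s = sum(binomial(j - i - 1, k) * (-1)**k / (a + b + n - i - k)
            for k in range(n - i))
    return binomial(n - i, j - i) * s

T = Matrix(n, n, lambda i, j: T_entry(i + 1, j + 1))

# Check the differential equation (2.24): dM/da = T * M.
assert (M.diff(a) - T * M).applyfunc(cancel) == Matrix.zeros(n, n)

# Check (2.25): det M_n(a) / prod_{l=1}^{n-1} (a+b+l)^(n-l) does not
# depend on a.
ratio = cancel(M.det() / Mul(*[(a + b + l)**(n - l) for l in range(1, n)]))
assert ratio.diff(a) == 0
```

Raising n in this sketch costs only computer time; the symbolic checks stay exact.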


More sophisticated applications of this method (actually, of a version for systems of difference operators) can be found in [175, Proof of Theorem 5.14] and [176, Proofs of Theorems 5.9, 5.10, 5.11], in the context of the Knizhnik–Zamolodchikov equations.

2.6. LU-factorization. This is George Andrews' favourite method. Its starting point is the well-known fact (see [53, p. 33ff]) that, given a square matrix M, there exists, under suitable, not very stringent conditions (in particular, these are satisfied if all top-left principal minors of M are nonzero), a unique lower triangular matrix L and a unique upper triangular matrix U, the latter with all entries along the diagonal equal to 1, such that

M = L · U. (2.26)

This unique factorization of the matrix M is known as the L(ower triangular)U(pper triangular)-factorization of M, or as the Gauß decomposition of M. Equivalently, for a square matrix M (satisfying these conditions) there exists a unique lower triangular matrix L and a unique upper triangular matrix U, the latter with all entries along the diagonal equal to 1, such that

M · U = L. (2.27)

Clearly, once you know L and U, the determinant of M is easily computed, as it equals the product of the diagonal entries of L.

Now, let us suppose that we are given a family (M_n)_{n≥0} of matrices, where M_n is an n × n matrix, n = 0, 1, . . . , of which we want to compute the determinant. Maybe M_n is the matrix underlying the determinant in (1.3). By the above, we know that (normally) there exist uniquely determined matrices L_n and U_n, n = 0, 1, . . . , with L_n lower triangular and U_n upper triangular with all diagonal entries equal to 1, such that

M_n · U_n = L_n. (2.28)

However, we do not know what the matrices L_n and U_n are. What George Andrews does is go to his computer, crank out L_n and U_n for n = 1, 2, 3, 4, . . . (this just amounts to solving a system of linear equations), and, out of the data, try to guess what the coefficients of the matrices L_n and U_n are. Once he has worked out a guess, he somehow proves that his guessed matrices L_n and U_n do indeed satisfy (2.28). This program is carried out in [10] for the family of determinants in (1.3). As it turns out, guessing is really easy, while the underlying hypergeometric identities which are needed for the proof of (2.28) are (from a hypergeometric viewpoint) quite interesting.
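The "crank it out" step really is just linear algebra: requiring that M_n · U_n be lower triangular forces, for each column of U_n, a square linear system on the unknown entries above the diagonal. Here is a small sympy sketch; the helper name and the 3 × 3 test matrix are made up for illustration.

```python
from sympy import Matrix

def crank_out_LU(M):
    """Solve for the unit upper triangular U and lower triangular L
    with M * U = L, as in (2.28).  For each column j of U, forcing the
    entries of M*U above the diagonal to vanish is a linear system in
    the unknown entries U[0..j-1, j]."""
    n = M.rows
    U = Matrix.eye(n)
    for j in range(1, n):
        A = M[:j, :j]              # needs nonzero principal minors
        sol = A.solve(-M[:j, j])   # kill (M*U)[i, j] for i < j
        for i in range(j):
            U[i, j] = sol[i]
    return M * U, U                # L = M * U

# A made-up 3x3 example:
M = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
L, U = crank_out_LU(M)
assert L[0, 1] == L[0, 2] == L[1, 2] == 0       # L is lower triangular
assert U[0, 0] == U[1, 1] == U[2, 2] == 1       # U has unit diagonal
assert L[0, 0] * L[1, 1] * L[2, 2] == M.det()   # det from diagonal of L
```

Running this for a symbolic family M_n at n = 1, 2, 3, 4, . . . produces exactly the data from which one then tries to guess closed forms for L_n and U_n.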

For a demonstration of the method of LU-factorization, we will content ourselves here with trying the method on the Vandermonde determinant. That is, let M_n be the matrix underlying the determinant in (2.1). We go to the computer and crank out the matrices L_n and U_n for small values of n. For the purpose of guessing, it suffices that I just display the matrices L_5 and U_5. They are


\[
L_5 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & (X_2-X_1) & 0 & 0 & 0 \\
1 & (X_3-X_1) & (X_3-X_1)(X_3-X_2) & 0 & 0 \\
1 & (X_4-X_1) & (X_4-X_1)(X_4-X_2) & (X_4-X_1)(X_4-X_2)(X_4-X_3) & 0 \\
1 & (X_5-X_1) & (X_5-X_1)(X_5-X_2) & (X_5-X_1)(X_5-X_2)(X_5-X_3) & (X_5-X_1)(X_5-X_2)(X_5-X_3)(X_5-X_4)
\end{pmatrix},
\]

and

\[
U_5 = \begin{pmatrix}
1 & -e_1(X_1) & e_2(X_1,X_2) & -e_3(X_1,X_2,X_3) & e_4(X_1,X_2,X_3,X_4) \\
0 & 1 & -e_1(X_1,X_2) & e_2(X_1,X_2,X_3) & -e_3(X_1,X_2,X_3,X_4) \\
0 & 0 & 1 & -e_1(X_1,X_2,X_3) & e_2(X_1,X_2,X_3,X_4) \\
0 & 0 & 0 & 1 & -e_1(X_1,X_2,X_3,X_4) \\
0 & 0 & 0 & 0 & 1
\end{pmatrix},
\]

where e_m(X_1, X_2, \ldots, X_s) = \sum_{1\le i_1<\cdots<i_m\le s} X_{i_1}\cdots X_{i_m} denotes the m-th elementary symmetric function.

Having seen that, it will not take you long to guess that, apparently, L_n is given by

\[
L_n = \left( \prod_{k=1}^{j-1} (X_i - X_k) \right)_{1\le i,j\le n},
\]

and that U_n is given by

\[
U_n = \Big( (-1)^{j-i}\, e_{j-i}(X_1, \ldots, X_{j-1}) \Big)_{1\le i,j\le n},
\]

where, of course, e_m(X_1, \ldots) := 0 if m < 0. That (2.28) holds with these choices of L_n and U_n is easy to verify. Thus, the Vandermonde determinant equals the product of the diagonal entries of L_n, which is exactly the product on the right-hand side of (2.1).
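The guessed L_n and U_n are themselves easy to check by computer; the following sympy sketch does so for n = 4.

```python
from itertools import combinations
from sympy import symbols, Matrix, Mul, S

n = 4
X = symbols('X1:5')  # X1, X2, X3, X4

def e(m, xs):
    """m-th elementary symmetric function of the variables xs
    (0 for m < 0, 1 for m = 0, 0 for m > len(xs))."""
    if m < 0:
        return S.Zero
    return sum((Mul(*c) for c in combinations(xs, m)), S.Zero)

M = Matrix(n, n, lambda i, j: X[i]**j)  # the Vandermonde matrix of (2.1)

# The guessed factors (indices here are 0-based):
L = Matrix(n, n, lambda i, j: Mul(*[X[i] - X[k] for k in range(j)]))

def U_entry(i, j):
    if j < i:
        return S.Zero
    return (-1)**(j - i) * e(j - i, X[:j])

U = Matrix(n, n, lambda i, j: U_entry(i, j))

# The guessed factorization (2.28): M * U = L
assert (M * U - L).expand() == Matrix.zeros(n, n)

# Hence det M = product of the diagonal of L = the Vandermonde product
vprod = Mul(*[X[j] - X[i] for i in range(n) for j in range(i + 1, n)])
assert (M.det() - vprod).expand() == 0
```

The first assertion is entrywise the expansion of \prod_{k<j}(t - X_k) in elementary symmetric functions, evaluated at t = X_i, which is why the verification succeeds identically.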

Applications of LU-factorization are abundant in the work of George Andrews [4, 5, 6, 7, 8, 10]. All of them concern solutions to difficult enumeration problems on various types of plane partitions. To mention another example, Aomoto and Kato [11, Theorem 3] computed the LU-factorization of a matrix which arose in the theory of q-difference equations, thus proving a conjecture by Mimachi [118].

Needless to say, this allows for variations. You may try to guess (2.26) directly (and not its variation (2.27)), or you may try to guess the U(pper triangular)L(ower triangular) factorization, or its variation in the style of (2.27). I am saying this because it may be easy to guess the form of one of these variations, while it can be very difficult to guess the form of another.

It should be observed that the way LU-factorization is used here in order to evaluate determinants is very much in the same spirit as "identification of factors" as described in the previous section. In both cases, the essential steps are to first guess something, and then prove the guess. Therefore, the remarks from the previous section about guessing


and proving binomial (hypergeometric) identities apply here as well. In particular, for guessing you are once more referred to Appendix A.

It is important to note that, as opposed to "condensation" or "identification of factors," LU-factorization does not require any parameter. So, in principle, it is applicable to any determinant (which satisfies the aforementioned conditions). If there are limitations, then, from my experience, it is that the coefficients which have to be guessed in LU-factorization tend to be more complicated than in "identification of factors". That is, guessing (2.28) (or one of its variations) may sometimes be not so easy.

2.7. Hankel determinants. A Hankel determinant is a determinant of a matrix which has constant entries along antidiagonals, i.e., it is a determinant of the form

\[
\det_{1\le i,j\le n}(c_{i+j}).
\]

If you encounter a Hankel determinant which you think evaluates nicely, then expect the evaluation of your Hankel determinant to be found within the domain of continued fractions and orthogonal polynomials. In this section I explain what this connection is.

To make things concrete, let us suppose that we want to evaluate

\[
\det_{0\le i,j\le n-1}(B_{i+j+2}), \tag{2.29}
\]

where B_k denotes the k-th Bernoulli number. (The Bernoulli numbers are defined via their generating function, \sum_{k=0}^{\infty} B_k z^k/k! = z/(e^z - 1).) You have to try hard if you

want to find an evaluation of (2.29) explicitly in the literature. Indeed, you can find it, hidden in Appendix A.5 of [108]. However, even if you are not able to discover this reference (which I would not have been, had the author of [108] not drawn my attention to it), there is a rather straightforward way to find an evaluation of (2.29), which I outline below. It is based on the fact, and this is the main point of this section, that evaluations of Hankel determinants like (2.29) are, at least implicitly, in the literature on the theory of orthogonal polynomials and continued fractions, which is very accessible today.

So, let us review the relevant facts about orthogonal polynomials and continued

fractions (see [76, 81, 128, 174, 186, 188] for more information on these topics).

We begin by citing the result, due to Heilermann, which makes the connection be-

tween Hankel determinants and continued fractions.

Theorem 11. (Cf. [188, Theorem 51.1] or [186, Corollaire 6, (19), on p. IV-17]). Let (µ_k)_{k≥0} be a sequence of numbers with generating function \sum_{k=0}^{\infty} \mu_k x^k written in the form

\[
\sum_{k=0}^{\infty} \mu_k x^k = \cfrac{\mu_0}{1 + a_0 x - \cfrac{b_1 x^2}{1 + a_1 x - \cfrac{b_2 x^2}{1 + a_2 x - \cdots}}}. \tag{2.30}
\]

Then the Hankel determinant \det_{0\le i,j\le n-1}(\mu_{i+j}) equals \mu_0^n\, b_1^{n-1}\, b_2^{n-2} \cdots b_{n-2}^2\, b_{n-1}.

(We remark that a continued fraction of the type as in (2.30) is called a J-fraction.)
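Theorem 11 is easy to test numerically. In the sympy sketch below, the a_i, b_i and µ_0 are invented values; truncating the continued fraction amounts to setting the deeper b_i to zero, so the theorem applies exactly to the moments generated this way.

```python
from sympy import symbols, Rational, Matrix, Mul

x = symbols('x')

# Made-up J-fraction data (invented for this sketch):
a_coeffs = [1, 2, 1, 3, 1, 2]          # a_0, ..., a_5
b_coeffs = [2, 3, 1, 4, 2]             # b_1, ..., b_5
mu0 = Rational(5)

# Evaluate the continued fraction (2.30) from the bottom up.
cf = 1 + a_coeffs[-1] * x
for ak, bk in zip(reversed(a_coeffs[:-1]), reversed(b_coeffs)):
    cf = 1 + ak * x - bk * x**2 / cf
f = (mu0 / cf).series(x, 0, 10).removeO()
mu = [f.coeff(x, k) for k in range(10)]   # the moments mu_0, mu_1, ...

# Theorem 11: det(mu_{i+j})_{0<=i,j<=n-1} = mu_0^n b_1^{n-1} ... b_{n-1}
for size in range(1, 5):
    hankel = Matrix(size, size, lambda i, j: mu[i + j]).det()
    predicted = mu0**size * Mul(*[b_coeffs[k - 1]**(size - k)
                                  for k in range(1, size)])
    assert hankel == predicted
```

Note that the a_i drop out of the Hankel determinants entirely, exactly as the theorem asserts.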

Okay, that means we would have evaluated (2.29) once we are able to explicitly expand the generating function \sum_{k=0}^{\infty} B_{k+2} x^k in terms of a continued fraction of the


form of the right-hand side of (2.30). Using the tools explained in Appendix A, it is easy to work out a conjecture,

\[
\sum_{k=0}^{\infty} B_{k+2} x^k = \cfrac{1/6}{1 - \cfrac{b_1 x^2}{1 - \cfrac{b_2 x^2}{1 - \cdots}}}, \tag{2.31}
\]

where b_i = -i(i+1)^2(i+2)/(4(2i+1)(2i+3)), i = 1, 2, . . . . If we could find this expansion in the literature, then we would be done. But if not (which is the case here), how does one prove such an expansion? The key is orthogonal polynomials.

A sequence (p_n(x))_{n≥0} of polynomials is called (formally) orthogonal if p_n(x) has degree n, n = 0, 1, . . . , and if there exists a linear functional L such that L(p_n(x)p_m(x)) = \delta_{m,n} c_n for some sequence (c_n)_{n≥0} of nonzero numbers, with \delta_{m,n} denoting the Kronecker delta (i.e., \delta_{m,n} = 1 if m = n and \delta_{m,n} = 0 otherwise).

The first important theorem in the theory of orthogonal polynomials is Favard's Theorem, which gives an unexpected characterization for sequences of orthogonal polynomials, in that it completely avoids the mention of the functional L.

Theorem 12. (Cf. [186, Théorème 9 on p. I-4] or [188, Theorem 50.1]). Let (p_n(x))_{n≥0} be a sequence of monic polynomials, the polynomial p_n(x) having degree n, n = 0, 1, . . . . Then the sequence (p_n(x)) is (formally) orthogonal if and only if there exist sequences (a_n)_{n≥1} and (b_n)_{n≥1}, with b_n ≠ 0 for all n ≥ 1, such that the three-term recurrence

\[
p_{n+1}(x) = (a_n + x)\,p_n(x) - b_n\,p_{n-1}(x), \quad \text{for } n \ge 1, \tag{2.32}
\]

holds, with initial conditions p_0(x) = 1 and p_1(x) = x + a_0.

What is the connection between orthogonal polynomials and continued fractions?

This question is answered by the next theorem, the link being the generating function

of the moments.

Theorem 13. (Cf. [188, Theorem 51.1] or [186, Proposition 1, (7), on p. V-5]). Let (p_n(x))_{n≥0} be a sequence of monic polynomials, the polynomial p_n(x) having degree n, which is orthogonal with respect to some functional L. Let

\[
p_{n+1}(x) = (a_n + x)\,p_n(x) - b_n\,p_{n-1}(x) \tag{2.33}
\]

be the corresponding three-term recurrence which is guaranteed by Favard's theorem. Then the generating function \sum_{k=0}^{\infty} \mu_k x^k for the moments \mu_k = L(x^k) satisfies (2.30) with the a_i's and b_i's being the coefficients in the three-term recurrence (2.33).

Thus, what we have to do is find orthogonal polynomials (p_n(x))_{n≥0}, the three-term recurrence of which is explicitly known, and which are orthogonal with respect to some linear functional L whose moments L(x^k) are exactly equal to B_{k+2}. So, what would be very helpful at this point is some sort of table of orthogonal polynomials. Indeed, there is such a table for hypergeometric and basic hypergeometric orthogonal polynomials, proposed by Richard Askey (therefore called the "Askey table"), and compiled by Koekoek and Swarttouw [81].

Indeed, in Section 1.4 of [81], we find the family of orthogonal polynomials that is of relevance here, the continuous Hahn polynomials, first studied by Atakishiyev and Suslov [13] and Askey [12]. These polynomials depend on four parameters, a, b, c, d. It


is just the special choice a = b = c = d = 1 which is of interest to us. The theorem below lists the relevant facts about these special polynomials.

Theorem 14. The continuous Hahn polynomials with parameters a = b = c = d = 1, (p_n(x))_{n≥0}, are the monic polynomials defined by

\[
p_n(x) = (\sqrt{-1}\,)^n\, \frac{(n+1)!^2\,(n+2)!}{(2n+2)!} \sum_{k=0}^{n} \frac{(-n)_k\,(n+3)_k\,(1+x\sqrt{-1}\,)_k}{k!\,(k+1)!^2}, \tag{2.34}
\]

with the shifted factorial (a)_k defined as previously (see (2.19)). These polynomials satisfy the three-term recurrence

\[
p_{n+1}(x) = x\,p_n(x) + \frac{n(n+1)^2(n+2)}{4(2n+1)(2n+3)}\,p_{n-1}(x). \tag{2.35}
\]
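Combining Theorems 13 and 14, one can test the whole chain directly: generating the p_n from the recurrence (2.35) and pairing them against the moments µ_k = B_{k+2} should exhibit orthogonality. A sympy sketch:

```python
from sympy import symbols, Rational, bernoulli, Poly, expand

x = symbols('x')

# p_0, p_1, p_2, ... from the three-term recurrence (2.35):
def hahn_polys(N):
    p = [x**0, x]  # p_0 = 1, p_1 = x  (all a_n = 0 here)
    for n in range(1, N):
        c = Rational(n * (n + 1)**2 * (n + 2), 4 * (2*n + 1) * (2*n + 3))
        p.append(expand(x * p[n] + c * p[n - 1]))
    return p

# The linear functional with moments L(x^k) = B_{k+2}:
def L(poly):
    q = Poly(poly, x)
    return sum(c * bernoulli(q.degree() - pos + 2)
               for pos, c in enumerate(q.all_coeffs()))

p = hahn_polys(5)

# Orthogonality with respect to these moments:
for n in range(5):
    for m in range(n):
        assert L(expand(p[n] * p[m])) == 0
    assert L(expand(p[n] * p[n])) != 0
```

By Theorem 13, this orthogonality is exactly what makes the b_n of (2.35) the coefficients of the continued fraction (2.31), and hence, via Theorem 11, evaluates the Hankel determinant (2.29).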
