
constant.
As I explain in Appendix A, guessing can be largely automatized. It was already
mentioned in the Introduction that proving binomial (hypergeometric) identities can
be done by the computer, thanks to the "WZ-machinery" [130, 190, 194, 195, 196] (see
Footnote 5). Computing the degree bound is (in most cases) so easy that no computer is
needed. (You may use it if you want.) It is only the determination of the multiplicative
constant (item 5 above) by means of a special evaluation of the determinant or the
evaluation of a special coefficient (in our example we determined the coefficient of
$\mu^{\binom{n}{2}}$) for which I am not able to offer a recipe so that things could be
carried out on a computer.
The reader should notice that crucial for a successful application of the method
is the existence of (at least) one parameter (in our example this is µ), in order to be
able to apply the polynomiality arguments that are the "engine" of the method. If there
is no parameter (such as in the determinant in Conjecture 49, or in the determinant (3.46)
which would solve the problem of q-enumerating totally symmetric plane partitions),
then we cannot even get started. (See the last paragraph of Section 2.1 for a few hints
on how to introduce a parameter into your determinant, in case you are short of a
parameter.)
On the other hand, a significant advantage of the "identification of factors method"
is that it is capable of proving not only evaluations of the form

det(M) = CLOSED FORM,

(where CLOSED FORM means a product/quotient of "nice" factors, such as (2.19) or
(2.17)), but also evaluations of the form

det(M) = (CLOSED FORM) × (UGLY POLYNOMIAL),     (2.21)

where, of course, M is a matrix containing (at least) one parameter, µ say. Exam-
ples of such determinant evaluations are (3.38), (3.39), (3.45) or (3.48). (The UGLY
POLYNOMIAL in (3.38), (3.39) and (3.48) is the respective sum on the right-hand
side, which in neither case can be simplified.)
How would one approach the proof of such an evaluation? One part we already
know: "identification of factors" enables us to show that (CLOSED FORM) divides
det(M) as a polynomial in µ. Then, comparison of degrees in µ on both sides of
(2.21) shows that (UGLY POLYNOMIAL) is an (at this point unknown) polynomial in
µ of some maximal degree, m say. How can we determine this polynomial? Nothing
"simpler" than that: we find m + 1 values e such that we are able to evaluate det(M)
at µ = e. If we then set µ = e in (2.21) and solve for (UGLY POLYNOMIAL), we
obtain evaluations of (UGLY POLYNOMIAL) at m + 1 different values of µ. Clearly,
this suffices to find (UGLY POLYNOMIAL), e.g., by Lagrange interpolation.
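In computational terms, the interpolation step is routine. The following is a minimal sketch (in Python with sympy); the callables `det_at` and `closed_form` are hypothetical stand-ins for whatever special evaluations and closed form one has at hand:

```python
# A sketch of the interpolation step: recover (UGLY POLYNOMIAL), of degree
# at most m, from m + 1 special evaluations of det(M).  The functions
# det_at(e) and closed_form(e) are hypothetical: they stand for the special
# determinant evaluations and for (CLOSED FORM) evaluated at mu = e.
import sympy as sp

mu = sp.Symbol('mu')

def ugly_polynomial(special_points, det_at, closed_form):
    # one data point (e, det(M)|_{mu=e} / CLOSED FORM|_{mu=e}) per value e
    data = [(e, det_at(e) / closed_form(e)) for e in special_points]
    # Lagrange interpolation through the m + 1 data points
    return sp.expand(sp.interpolate(data, mu))
```

The caveat discussed next, of course, is whether enough special points e can be found at all.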
I put "simpler" in quotes, because it is here where the crux is: we may not be able
to find enough such special evaluations of det(M). In fact, you may object: 'Why all
these complications? If we are able to find m + 1 special values of µ for which
we are able to evaluate det(M), then what prevents us from evaluating det(M) as a
whole, for generic µ?' When I am talking of evaluating det(M) for µ = e, what I
have in mind is that the evaluation of det(M) at µ = e is "nice" (i.e., gives a "closed
form," with no "ugly" expression involved, such as in (2.21)), which is easier to identify
(that is, to guess; see Appendix A) and in most cases easier to prove. By experience,
such evaluations are rare. Therefore, the above described procedure will only work if
the degree of (UGLY POLYNOMIAL) is not too large. (If you are just a bit short of
evaluations, then finding other information about (UGLY POLYNOMIAL), such as the
leading coefficient, may help to overcome the problem.)
To demonstrate this procedure by going through a concrete example is beyond the
scope of this article. We refer the reader to [28, 43, 50, 51, 89, 90] for places where this
procedure was successfully used to solve difficult enumeration problems on rhombus
tilings, respectively to prove a conjectured constant term identity.


2.5. A differential/difference equation method. In this section I outline a
method for the evaluation of determinants, often used by Vitaly Tarasov and Alexander
Varchenko, which, as the preceding method, also requires (at least) one parameter.
Suppose we are given a matrix M = M(z), depending on the parameter z, of which
we want to compute the determinant. Furthermore, suppose we know that M satisfies
a differential equation of the form

$$\frac{d}{dz}M(z)=T(z)\,M(z),\tag{2.22}$$

where T(z) is some other known matrix. Then, by elementary linear algebra, we obtain
a differential equation for the determinant,

$$\frac{d}{dz}\det M(z)=\operatorname{Tr}(T(z))\cdot\det M(z),\tag{2.23}$$

which is usually easy to solve. (In fact, the differential operator in (2.22) and (2.23)
could be replaced by any operator. In particular, we could replace d/dz by the difference
operator with respect to z, in which case (2.23) is usually easy to solve as well.)
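For readers who want the "elementary linear algebra" spelled out: (2.23) is an instance of Jacobi's formula for the derivative of a determinant, combined with the cyclicity of the trace (valid wherever M(z) is invertible),

$$\frac{d}{dz}\det M(z)=\det M(z)\,\operatorname{Tr}\!\Bigl(M(z)^{-1}\frac{d}{dz}M(z)\Bigr)=\det M(z)\,\operatorname{Tr}\bigl(M(z)^{-1}T(z)M(z)\bigr)=\operatorname{Tr}(T(z))\,\det M(z).$$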
Any method is best illustrated by an example. Let us try this method on the deter-
minant (1.2). True, we did already evaluate this determinant twice (see Sections 2.2
and 2.3), but let us pretend that we have forgotten all this.
Of course, application of the method to (1.2) itself does not seem to be extremely
promising, because that would involve the differentiation of binomial coefficients. So,
let us first take some factors out of the determinant (as we also did in Section 2.2),
$$\det_{1\le i,j\le n}\left(\binom{a+b}{a-i+j}\right)=\prod_{i=1}^{n}\frac{(a+b)!}{(a-i+n)!\,(b+i-1)!}\times\det_{1\le i,j\le n}\Bigl((a-i+n)(a-i+n-1)\cdots(a-i+j+1)\cdot(b+i-j+1)(b+i-j+2)\cdots(b+i-1)\Bigr).$$
Let us denote the matrix underlying the determinant on the right-hand side of this
equation by $M_n(a)$. In order to apply the above method, we need a matrix
$T_n(a)$ such that

$$\frac{d}{da}M_n(a)=T_n(a)\,M_n(a).\tag{2.24}$$
Similar to the procedure of Section 2.6, the best idea is to go to the computer, crank
out $T_n(a)$ for $n = 1, 2, 3, 4, \dots$, and, out of the data, make a guess for $T_n(a)$. Indeed, it
suffices that I display $T_5(a)$,

$$T_5(a)=\begin{pmatrix}
\frac{1}{1+a+b}+\frac{1}{2+a+b}+\frac{1}{3+a+b}+\frac{1}{4+a+b} & \frac{4}{4+a+b} & -\frac{6}{3+a+b}+\frac{6}{4+a+b} & \frac{4}{2+a+b}-\frac{8}{3+a+b}+\frac{4}{4+a+b} & -\frac{1}{1+a+b}+\frac{3}{2+a+b}-\frac{3}{3+a+b}+\frac{1}{4+a+b}\\
0 & \frac{1}{1+a+b}+\frac{1}{2+a+b}+\frac{1}{3+a+b} & \frac{3}{3+a+b} & -\frac{3}{2+a+b}+\frac{3}{3+a+b} & \frac{1}{1+a+b}-\frac{2}{2+a+b}+\frac{1}{3+a+b}\\
0 & 0 & \frac{1}{1+a+b}+\frac{1}{2+a+b} & \frac{2}{2+a+b} & -\frac{1}{1+a+b}+\frac{1}{2+a+b}\\
0 & 0 & 0 & \frac{1}{1+a+b} & \frac{1}{1+a+b}\\
0 & 0 & 0 & 0 & 0
\end{pmatrix},$$

so that you are forced to conclude that, apparently, it must be true that

$$T_n(a)=\left(\binom{n-i}{j-i}\sum_{k=0}^{n-i-1}\binom{j-i-1}{k}\frac{(-1)^k}{a+b+n-i-k}\right)_{1\le i,j\le n}.$$
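The "cranking out" step itself is completely mechanical. The following is a minimal sketch of how such data might be generated (Python with sympy; an illustration under the definitions above, not the computation actually performed for this text):

```python
# Generate guessing data: T_n(a) = (d/da M_n(a)) . M_n(a)^{-1} for small n,
# where M_n(a) is the matrix underlying the determinant above.
import sympy as sp

a, b = sp.symbols('a b')

def M(n):
    def entry(i, j):  # 1 <= i, j <= n
        first = sp.prod([a - i + m for m in range(j + 1, n + 1)])
        second = sp.prod([b + i - j + m for m in range(1, j)])
        return first * second
    return sp.Matrix(n, n, lambda i, j: entry(i + 1, j + 1))

for n in range(1, 5):
    Mn = M(n)
    Tn = sp.simplify(Mn.diff(a) * Mn.inv())
    print(f'T_{n}(a) =', Tn)
```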

That (2.24) holds with this choice of $T_n(a)$ is then easy to verify. Consequently, by
means of (2.23), we have

$$\frac{d}{da}\det M_n(a)=\sum_{\ell=1}^{n-1}\frac{n-\ell}{a+b+\ell}\,\det M_n(a),$$
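(Here the coefficient on the right-hand side is precisely $\operatorname{Tr}(T_n(a))$, as required by (2.23): since $\binom{-1}{k}=(-1)^k$, the diagonal entries of the guessed matrix are $\sum_{m=1}^{n-i}1/(a+b+m)$, whence

$$\operatorname{Tr}(T_n(a))=\sum_{i=1}^{n}\sum_{m=1}^{n-i}\frac{1}{a+b+m}=\sum_{\ell=1}^{n-1}\frac{n-\ell}{a+b+\ell}.)$$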

so that

$$\det M_n(a)=\text{constant}\cdot\prod_{\ell=1}^{n-1}(a+b+\ell)^{n-\ell}.\tag{2.25}$$
The constant is found to be $\prod_{\ell=0}^{n-1}\ell!$, e.g., by dividing both sides of (2.25) by
$a^{\binom{n}{2}}$, letting $a$ tend to infinity, and applying (2.2) to the remaining determinant.

More sophisticated applications of this method (actually, of a version for systems of
difference operators) can be found in [175, Proof of Theorem 5.14] and [176, Proofs of
Theorems 5.9, 5.10, 5.11], in the context of the Knizhnik–Zamolodchikov equations.


2.6. LU-factorization. This is George Andrews' favourite method. The starting point
is the well-known fact (see [53, p. 33ff]) that, given a square matrix M, there exists,
under suitable, not very stringent conditions (in particular, these are satisfied if all
top-left principal minors of M are nonzero), a unique lower triangular matrix L and a
unique upper triangular matrix U, the latter with all entries along the diagonal equal to
1, such that

M = L · U.     (2.26)

This unique factorization of the matrix M is known as the L(ower triangular)U(pper
triangular)-factorization of M, or as the Gauß decomposition of M.
Equivalently, for a square matrix M (satisfying these conditions) there exists a unique
lower triangular matrix L and a unique upper triangular matrix U, the latter with all
entries along the diagonal equal to 1, such that

M · U = L. (2.27)

Clearly, once you know L and U, the determinant of M is easily computed, as it equals
the product of the diagonal entries of L.
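Indeed, taking determinants in (2.26) (or, equally, in (2.27)) and using that the unitriangular factor U has determinant 1,

$$\det M=\det L\cdot\det U=\det L=\prod_{i=1}^{n}L_{i,i}.$$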
Now, let us suppose that we are given a family (Mn )n≥0 of matrices, where Mn is
an n × n matrix, n = 0, 1, . . . , of which we want to compute the determinant. Maybe
Mn is the determinant in (1.3). By the above, we know that (normally) there exist
uniquely determined matrices Ln and Un, n = 0, 1, . . . , Ln being lower triangular, Un
being upper triangular with all diagonal entries equal to 1, such that

Mn · Un = Ln . (2.28)

However, we do not know what the matrices Ln and Un are. What George Andrews
does is that he goes to his computer, cranks out Ln and Un for n = 1, 2, 3, 4, . . . (this
just amounts to solving a system of linear equations), and, out of the data, tries to
guess what the coefficients of the matrices Ln and Un are. Once he has worked out a
guess, he somehow proves that his guessed matrices Ln and Un do indeed satisfy (2.28).
This program is carried out in [10] for the family of determinants in (1.3). As it turns
out, guessing is really easy, while the underlying hypergeometric identities which are
needed for the proof of (2.28) are (from a hypergeometric viewpoint) quite interesting.
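The "solving a system of linear equations" step is easy to mechanize. A minimal sketch (Python with sympy; an illustration of the procedure, not Andrews' actual code — the function name is mine):

```python
# Solve M_n U_n = L_n for the unknown entries of U_n: require that M_n U_n be
# lower triangular, where U_n is upper triangular with 1's on the diagonal.
import sympy as sp

def crank_out_LU(M):
    n = M.shape[0]
    unknowns = {(i, j): sp.Symbol(f'u{i}{j}')
                for i in range(n) for j in range(i + 1, n)}
    U = sp.eye(n)
    for (i, j), s in unknowns.items():
        U[i, j] = s
    P = M * U
    # the entries of M*U above the diagonal must vanish
    eqs = [P[i, j] for i in range(n) for j in range(i + 1, n)]
    sol = sp.solve(eqs, list(unknowns.values()), dict=True)[0]
    return sp.simplify(P.subs(sol)), U.subs(sol)   # L_n and U_n

# for instance, for the Vandermonde matrix (X_i^{j-1}) of (2.1), with n = 4:
X = sp.symbols('X1:5')
L4, U4 = crank_out_LU(sp.Matrix(4, 4, lambda i, j: X[i] ** j))
```

This is exactly the experiment carried out next, for the Vandermonde determinant.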
For a demonstration of the method of LU-factorization, we will content ourselves
here with trying the method on the Vandermonde determinant. That is, let Mn be the
determinant in (2.1). We go to the computer and crank out the matrices Ln and Un
for small values of n. For the purpose of guessing, it suffices that I just display the
matrices $L_5$ and $U_5$. They are


$$L_5=\begin{pmatrix}
1 & 0 & 0 & 0 & 0\\
1 & (X_2-X_1) & 0 & 0 & 0\\
1 & (X_3-X_1) & (X_3-X_1)(X_3-X_2) & 0 & 0\\
1 & (X_4-X_1) & (X_4-X_1)(X_4-X_2) & (X_4-X_1)(X_4-X_2)(X_4-X_3) & 0\\
1 & (X_5-X_1) & (X_5-X_1)(X_5-X_2) & (X_5-X_1)(X_5-X_2)(X_5-X_3) & (X_5-X_1)(X_5-X_2)(X_5-X_3)(X_5-X_4)
\end{pmatrix}$$

and

$$U_5=\begin{pmatrix}
1 & -e_1(X_1) & e_2(X_1,X_2) & -e_3(X_1,X_2,X_3) & e_4(X_1,X_2,X_3,X_4)\\
0 & 1 & -e_1(X_1,X_2) & e_2(X_1,X_2,X_3) & -e_3(X_1,X_2,X_3,X_4)\\
0 & 0 & 1 & -e_1(X_1,X_2,X_3) & e_2(X_1,X_2,X_3,X_4)\\
0 & 0 & 0 & 1 & -e_1(X_1,X_2,X_3,X_4)\\
0 & 0 & 0 & 0 & 1
\end{pmatrix},$$

where $e_m(X_1,X_2,\dots,X_s)=\sum_{1\le i_1<\dots<i_m\le s}X_{i_1}\cdots X_{i_m}$ denotes the $m$-th elementary
symmetric function.
Having seen that, it will not take you long to guess that, apparently, $L_n$ is given
by

$$L_n=\left(\prod_{k=1}^{j-1}(X_i-X_k)\right)_{1\le i,j\le n},$$

and that $U_n$ is given by

$$U_n=\Bigl((-1)^{j-i}\,e_{j-i}(X_1,\dots,X_{j-1})\Bigr)_{1\le i,j\le n},$$

where, of course, $e_m(X_1,\dots):=0$ if $m<0$. That (2.28) holds with these choices of $L_n$
and $U_n$ is easy to verify. Thus, the Vandermonde determinant equals the product of
diagonal entries of $L_n$, which is exactly the product on the right-hand side of (2.1).
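That the verification is indeed easy can be made explicit: expanding $\prod_{k=1}^{j-1}(t-X_k)=\sum_{s\ge0}(-1)^s e_s(X_1,\dots,X_{j-1})\,t^{\,j-1-s}$ and setting $t=X_i$ shows that

$$(M_n U_n)_{i,j}=\sum_{m=1}^{j}X_i^{\,m-1}\,(-1)^{j-m}e_{j-m}(X_1,\dots,X_{j-1})=\prod_{k=1}^{j-1}(X_i-X_k)=(L_n)_{i,j},$$

which is exactly (2.28).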
Applications of LU-factorization are abundant in the work of George Andrews [4,
5, 6, 7, 8, 10]. All of them concern solutions to difficult enumeration problems on
various types of plane partitions. To mention another example, Aomoto and Kato [11,
Theorem 3] computed the LU-factorization of a matrix which arose in the theory of
q-difference equations, thus proving a conjecture by Mimachi [118].
Needless to say, this allows for variations. You may try to guess (2.26) directly
(and not its variation (2.27)), or you may try to guess the U(pper triangular)L(ower
triangular) factorization, or its variation in the style of (2.27). I am saying this because
it may be easy to guess the form of one of these variations, while it can be very difficult
to guess the form of another.
It should be observed that the way LU-factorization is used here in order to evaluate
determinants is very much in the same spirit as "identification of factors" as described in
the previous section. In both cases, the essential steps are to first guess something, and
then prove the guess. Therefore, the remarks from the previous section about guessing
and proving binomial (hypergeometric) identities apply here as well. In particular, for
guessing you are once more referred to Appendix A.
It is important to note that, as opposed to "condensation" or "identification of fac-
tors," LU-factorization does not require any parameter. So, in principle, it is applicable
to any determinant (which satisfies the aforementioned conditions). If there are limita-
tions, then, from my experience, it is that the coefficients which have to be guessed in
LU-factorization tend to be more complicated than in "identification of factors". That
is, guessing (2.28) (or one of its variations) may sometimes be not so easy.
2.7. Hankel determinants. A Hankel determinant is a determinant of a matrix
which has constant entries along antidiagonals, i.e., it is a determinant of the form

$$\det_{1\le i,j\le n}(c_{i+j}).$$

If you encounter a Hankel determinant which you think evaluates nicely, then expect
the evaluation of your Hankel determinant to be found within the domain of continued
fractions and orthogonal polynomials. In this section I explain what this connection is.
To make things concrete, let us suppose that we want to evaluate

$$\det_{0\le i,j\le n-1}(B_{i+j+2}),\tag{2.29}$$

where $B_k$ denotes the $k$-th Bernoulli number. (The Bernoulli numbers are defined via
their generating function, $\sum_{k=0}^{\infty}B_kz^k/k!=z/(e^z-1)$.) You have to try hard if you
want to find an evaluation of (2.29) explicitly in the literature. Indeed, you can find
it, hidden in Appendix A.5 of [108]. However, even if you are not able to discover this
reference (which I would not have either, had the author of [108] not drawn my attention
to it), there is a rather straightforward way to find an evaluation
of (2.29), which I outline below. It is based on the fact, and this is the main point of
this section, that evaluations of Hankel determinants like (2.29) are, at least implicitly,
in the literature on the theory of orthogonal polynomials and continued fractions, which
is very accessible today.
So, let us review the relevant facts about orthogonal polynomials and continued
fractions (see [76, 81, 128, 174, 186, 188] for more information on these topics).
We begin by citing the result, due to Heilermann, which makes the connection be-
tween Hankel determinants and continued fractions.
Theorem 11. (Cf. [188, Theorem 51.1] or [186, Corollaire 6, (19), on p. IV-17]). Let
$(\mu_k)_{k\ge0}$ be a sequence of numbers with generating function $\sum_{k=0}^{\infty}\mu_kx^k$ written in the
form

$$\sum_{k=0}^{\infty}\mu_kx^k=\cfrac{\mu_0}{1+a_0x-\cfrac{b_1x^2}{1+a_1x-\cfrac{b_2x^2}{1+a_2x-\cdots}}}\,.\tag{2.30}$$

Then the Hankel determinant $\det_{0\le i,j\le n-1}(\mu_{i+j})$ equals $\mu_0^{\,n}\,b_1^{\,n-1}\,b_2^{\,n-2}\cdots b_{n-2}^{\,2}\,b_{n-1}$.

(We remark that a continued fraction of the type as in (2.30) is called a J-fraction.)
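As a quick sanity check of Theorem 11 in the case n = 2: expanding (2.30) gives $\mu_1=-a_0\mu_0$ and $\mu_2=(a_0^2+b_1)\mu_0$, so that

$$\det\begin{pmatrix}\mu_0&\mu_1\\\mu_1&\mu_2\end{pmatrix}=\mu_0\mu_2-\mu_1^2=\mu_0^2(a_0^2+b_1)-a_0^2\mu_0^2=\mu_0^{\,2}\,b_1,$$

in accordance with the asserted product $\mu_0^{\,n}b_1^{\,n-1}\cdots b_{n-1}$.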
Okay, that means we would have evaluated (2.29) once we are able to explicitly
expand the generating function $\sum_{k=0}^{\infty}B_{k+2}x^k$ in terms of a continued fraction of the
form of the right-hand side of (2.30). Using the tools explained in Appendix A, it is
easy to work out a conjecture,

$$\sum_{k=0}^{\infty}B_{k+2}x^k=\cfrac{1/6}{1-\cfrac{b_1x^2}{1-\cfrac{b_2x^2}{1-\cdots}}}\,,\tag{2.31}$$

where $b_i=-\,i(i+1)^2(i+2)/(4(2i+1)(2i+3))$, $i=1,2,\dots$. If we could find this
expansion in the literature, we would be done. But if not (which is the case here),
how does one prove such an expansion? The key is orthogonal polynomials.
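Such a conjecture can be generated, and checked to any desired order, mechanically. A minimal sketch (Python with sympy): Theorem 11 implies $b_n=D_{n-1}D_{n+1}/D_n^2$, where $D_n=\det_{0\le i,j\le n-1}(B_{i+j+2})$, so the $b_n$ can be computed directly from the Bernoulli numbers and compared with the guessed formula.

```python
# Compute the J-fraction coefficients b_n of (2.31) from Hankel determinants
# of the moments mu_k = B_{k+2}, and compare with the conjectured formula.
import sympy as sp

def D(n):
    # n x n Hankel determinant of the Bernoulli numbers B_{i+j+2}; D(0) = 1
    if n == 0:
        return sp.Integer(1)
    return sp.Matrix(n, n, lambda i, j: sp.bernoulli(i + j + 2)).det()

for n in range(1, 7):
    b_n = D(n - 1) * D(n + 1) / D(n) ** 2
    conjectured = sp.Rational(-n * (n + 1) ** 2 * (n + 2),
                              4 * (2 * n + 1) * (2 * n + 3))
    assert b_n == conjectured
```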
A sequence $(p_n(x))_{n\ge0}$ of polynomials is called (formally) orthogonal if $p_n(x)$ has de-
gree $n$, $n=0,1,\dots$, and if there exists a linear functional $L$ such that $L(p_n(x)p_m(x))=
\delta_{m,n}c_n$ for some sequence $(c_n)_{n\ge0}$ of nonzero numbers, with $\delta_{m,n}$ denoting the Kronecker
delta (i.e., $\delta_{m,n}=1$ if $m=n$ and $\delta_{m,n}=0$ otherwise).
The first important theorem in the theory of orthogonal polynomials is Favard's
Theorem, which gives an unexpected characterization for sequences of orthogonal poly-
nomials, in that it completely avoids the mention of the functional L.
Theorem 12. (Cf. [186, Théorème 9 on p. I-4] or [188, Theorem 50.1]). Let $(p_n(x))_{n\ge0}$
be a sequence of monic polynomials, the polynomial $p_n(x)$ having degree $n$, $n=0,1,\dots$.
Then the sequence $(p_n(x))$ is (formally) orthogonal if and only if there exist sequences
$(a_n)_{n\ge0}$ and $(b_n)_{n\ge1}$, with $b_n\ne0$ for all $n\ge1$, such that the three-term recurrence

$$p_{n+1}(x)=(a_n+x)\,p_n(x)-b_n\,p_{n-1}(x),\qquad\text{for }n\ge1,\tag{2.32}$$

holds, with initial conditions $p_0(x)=1$ and $p_1(x)=x+a_0$.
What is the connection between orthogonal polynomials and continued fractions?
This question is answered by the next theorem, the link being the generating function
of the moments.
Theorem 13. (Cf. [188, Theorem 51.1] or [186, Proposition 1, (7), on p. V-5]). Let
$(p_n(x))_{n\ge0}$ be a sequence of monic polynomials, the polynomial $p_n(x)$ having degree $n$,
which is orthogonal with respect to some functional $L$. Let

$$p_{n+1}(x)=(a_n+x)\,p_n(x)-b_n\,p_{n-1}(x)\tag{2.33}$$

be the corresponding three-term recurrence which is guaranteed by Favard's theorem.
Then the generating function $\sum_{k=0}^{\infty}\mu_kx^k$ for the moments $\mu_k=L(x^k)$ satisfies (2.30)
with the $a_i$'s and $b_i$'s being the coefficients in the three-term recurrence (2.33).
Thus, what we have to do is to find orthogonal polynomials $(p_n(x))_{n\ge0}$, the three-term
recurrence of which is explicitly known, and which are orthogonal with respect to some
linear functional $L$ whose moments $L(x^k)$ are exactly equal to $B_{k+2}$. So, what would
be very helpful at this point is some sort of table of orthogonal polynomials. Indeed,
there is such a table for hypergeometric and basic hypergeometric orthogonal polynomi-
als, proposed by Richard Askey (therefore called the "Askey table"), and compiled by
Koekoek and Swarttouw [81].
Indeed, in Section 1.4 of [81], we find the family of orthogonal polynomials that is
of relevance here, the continuous Hahn polynomials, first studied by Atakishiyev and
Suslov [13] and Askey [12]. These polynomials depend on four parameters, a, b, c, d. It
is just the special choice a = b = c = d = 1 which is of interest to us. The theorem
below lists the relevant facts about these special polynomials.
Theorem 14. The continuous Hahn polynomials with parameters $a=b=c=d=1$,
$(p_n(x))_{n\ge0}$, are the monic polynomials defined by

$$p_n(x)=(\sqrt{-1}\,)^n\,\frac{(n+1)!^{\,2}\,(n+2)!}{(2n+2)!}\,\sum_{k=0}^{n}\frac{(-n)_k\,(n+3)_k\,(1+x\sqrt{-1}\,)_k}{k!\,(k+1)!^{\,2}},\tag{2.34}$$

with the shifted factorial $(a)_k$ defined as previously (see (2.19)). These polynomials
satisfy the three-term recurrence

$$p_{n+1}(x)=x\,p_n(x)+\frac{n(n+1)^2(n+2)}{4(2n+1)(2n+3)}\,p_{n-1}(x).\tag{2.35}$$
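Although the remaining steps of the derivation are not reproduced here, the recurrence (2.35) can at least be checked mechanically against the desired moments $L(x^k)=B_{k+2}$: building the monic orthogonal polynomials by Gram–Schmidt from these moments must reproduce (2.35). A minimal sketch (Python with sympy):

```python
# Verify (2.35) against the moment functional L(x^k) = B_{k+2}: construct the
# monic orthogonal polynomials by Gram-Schmidt and compare the recurrences.
import sympy as sp

x = sp.Symbol('x')

def L(p):
    # the linear functional with moments L(x^k) = B_{k+2}
    coeffs = sp.Poly(sp.expand(p), x).all_coeffs()[::-1]
    return sum(c * sp.bernoulli(k + 2) for k, c in enumerate(coeffs))

P = [sp.Integer(1)]                      # p_0 = 1
for n in range(6):
    q = x * P[-1]
    for pm in P:                         # subtract off all lower components
        q -= L(x * P[-1] * pm) / L(pm ** 2) * pm
    P.append(sp.expand(q))

# check p_{n+1} = x p_n + n(n+1)^2(n+2) / (4(2n+1)(2n+3)) p_{n-1}
for n in range(1, 5):
    c = sp.Rational(n * (n + 1) ** 2 * (n + 2), 4 * (2 * n + 1) * (2 * n + 3))
    assert sp.expand(P[n + 1] - x * P[n] - c * P[n - 1]) == 0
```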
