Institut für Mathematik der Universität Wien,
Strudlhofgasse 4, A-1090 Wien, Austria.
arXiv:math.CO/9902004 v3, 31 May 1999
E-mail: kratt@pap.univie.ac.at
WWW: http://radon.mat.univie.ac.at/People/kratt
Dedicated to the pioneer of determinant evaluations (among many other things),
George Andrews

Abstract. The purpose of this article is threefold. First, it provides the reader with
a few useful and efficient tools which should enable her/him to evaluate nontrivial
determinants in case such a determinant should appear in her/his research. Second,
it lists a number of such determinants that have already been evaluated, together with
explanations which tell in which contexts they have appeared. Third, it points out
references where further such determinant evaluations can be found.

1. Introduction
Imagine you are working on a problem. As things develop it turns out that, in
order to solve your problem, you need to evaluate a certain determinant. Maybe your
determinant is
$$\det_{1\le i,j\le n}\left(\frac{1}{i+j}\right), \tag{1.1}$$

maybe it is
$$\det_{1\le i,j\le n}\left(\binom{a+b}{a-i+j}\right), \tag{1.2}$$

or it is possibly
$$\det_{0\le i,j\le n-1}\left(\binom{x+i+j}{2i-j}\right), \tag{1.3}$$

1991 Mathematics Subject Classification. Primary 05A19; Secondary 05A10 05A15 05A17 05A18
05A30 05E10 05E15 11B68 11B73 11C20 15A15 33C45 33D45.
Key words and phrases. Determinants, Vandermonde determinant, Cauchy's double alternant,
Pfaffian, discrete Wronskian, Hankel determinants, orthogonal polynomials, Chebyshev polynomials,
Meixner polynomials, Meixner–Pollaczek polynomials, Hermite polynomials, Charlier polynomials,
Laguerre polynomials, Legendre polynomials, ultraspherical polynomials, continuous Hahn polynomials,
continued fractions, binomial coefficient, Genocchi numbers, Bernoulli numbers, Stirling numbers, Bell
numbers, Euler numbers, divided difference, interpolation, plane partitions, tableaux, rhombus tilings,
lozenge tilings, alternating sign matrices, noncrossing partitions, perfect matchings, permutations,
inversion number, major index, descent algebra, noncommutative symmetric functions.

Research partially supported by the Austrian Science Foundation FWF, grants P12094-MAT and

or maybe
$$\det_{0\le i,j\le n-1}\left(\binom{x+y+j}{x-i+2j}-\binom{x+y+j}{x+i+2j}\right). \tag{1.4}$$

Honestly, which ideas would you have? (Just to tell you that I do not ask for something
impossible: each of these four determinants can be evaluated in "closed form". If you
want to see the solutions immediately, plus information where these determinants come
from, then go to (2.7), (2.17)/(3.12), (2.19)/(3.30), respectively (3.47).)
Okay, let us try some row and column manipulations. Indeed, although it is not
completely trivial (actually, it is quite a challenge), that would work for the first two
determinants, (1.1) and (1.2), although I do not recommend that. However, I do not
recommend at all that you try this with the latter two determinants, (1.3) and (1.4). I
promise that you will fail. (The determinant (1.3) does not look much more complicated
than (1.2). Yet, it is.)
So, what should we do instead?
Of course, let us look in the literature! Excellent idea. We may have the problem
of not knowing where to start looking. Good starting points are certainly classics like
[119], [120], [121], [127] and [178]. This will lead to a first success, as (1.1) does
indeed turn up there (see [119, vol. III, p. 311]). Yes, you will also find evaluations for
(1.2) (see e.g. [126]) and (1.3) (see [112, Theorem 7]) in the existing literature. But at
the time of this writing you will not, to the best of my knowledge, find an evaluation of
(1.4) in the literature.
The purpose of this article is threefold. First, I want to describe a few useful and
efficient tools which should enable you to evaluate nontrivial determinants (see Section 2).
Second, I provide a list containing a number of such determinants that have
already been evaluated, together with explanations which tell in which contexts they
have appeared (see Section 3). Third, even if you should not find your determinant
in this list, I point out references where further such determinant evaluations can be
found; maybe your determinant is there.
Most important of all is that I want to convince you that, today,

Evaluating determinants is not (okay: may not be) difficult!

When George Andrews, who must rightly be called the pioneer of determinant evaluations,
astounded the combinatorial community in the seventies with his highly nontrivial
determinant evaluations (solving difficult enumeration problems on plane partitions),
it was really difficult. His method (see Section 2.6 for a description) required a good
"guesser" and an excellent "hypergeometer" (both of which he was and is). While at
that time being the latter, especially, was quite a task, in the meantime both guessing and
evaluating binomial and hypergeometric sums have been largely trivialized, as both can
be done (most of the time) completely automatically. For guessing (see Appendix A)

Turnbull's book [178] does in fact contain lots of very general identities satisfied by determinants,
rather than determinant "evaluations" in the strict sense of the word. However, suitable specializations
of these general identities do also yield "genuine" evaluations, see for example Appendix B. Since the
value of this book may not be easy to appreciate because of heavy notation, we refer the reader to
[102] for a clarification of the notation and a clear presentation of many such identities.

this is due to tools like Superseeker, gfun and Mgfun [152, 24], and Rate (which is
by far the most primitive of the three, but it is the most effective in this context). For
"hypergeometrics" this is due to the "WZ-machinery" (see [130, 190, 194, 195, 196]).
And even if you should meet a case where the WZ-machinery should exhaust your
computer's capacity, then there are still computer algebra packages like HYP and HYPQ,
or HYPERG, which make you an expert hypergeometer, as these packages comprise
large parts of the present hypergeometric knowledge, and, thus, enable you to
conveniently manipulate binomial and hypergeometric series (which George Andrews did
largely by hand) on the computer. Moreover, as of today, there are a few new (perhaps
just overlooked) insights which make life easier in many cases. It is these which form
large parts of Section 2.
So, if you see a determinant, don't be frightened, evaluate it yourself!

2. Methods for the evaluation of determinants
In this section I describe a few useful methods and theorems which (may) help you
to evaluate a determinant. As was mentioned already in the Introduction, it is always
possible that simple-minded things like doing some row and/or column operations, or
applying Laplace expansion, may produce an (usually inductive) evaluation of a
determinant. Therefore, you are of course advised to try such things first. What I am
mainly addressing here, though, is the case where that first, "simple-minded" attempt
failed. (Clearly, there is no point in addressing row and column operations, or Laplace
expansion, here.)
Yet, we must of course start (in Section 2.1) with some standard determinants, such
as the Vandermonde determinant or Cauchy's double alternant. These are of course
well-known.
In Section 2.2 we continue with some general determinant evaluations that generalize
the evaluation of the Vandermonde determinant, which are however apparently not
equally well-known, although they should be. In fact, I claim that about 80 % of the
determinants that you meet in "real life," and which can apparently be evaluated, are a
special case of just the very first of these (Lemma 3; see in particular Theorem 26 and
the subsequent remarks). Moreover, as is demonstrated in Section 2.2, it is pure routine
to check whether a determinant is a special case of one of these general determinants.
Thus, it can really be considered as a "method" to see if a determinant can be evaluated
by one of the theorems in Section 2.2.
Footnotes:
- Superseeker: the electronic version of the "Encyclopedia of Integer Sequences" [162, 161], written and developed
by Neil Sloane and Simon Plouffe; see http://www.research.att.com/~njas/sequences/ol.html
- gfun and Mgfun: written by Bruno Salvy and Paul Zimmermann, respectively Frédéric Chyzak; available from
- Rate: written in Mathematica by the author; available from http://radon.mat.univie.ac.at/People/kratt;
the Maple equivalent GUESS by François Béraud and Bruno Gauthier is available from
- WZ-machinery: Maple implementations written by Doron Zeilberger are available from;
Mathematica implementations written by Peter Paule, Axel Riese, Markus Schorn, Kurt Wegschaider are available from
- HYP and HYPQ: written in Mathematica by the author; available from http://radon.mat.univie.ac.at/People/kratt
- HYPERG: written in Maple by Bruno Gauthier; available from http://www-igm.univ-mlv.fr/~gauthier

The next method which I describe is the so-called "condensation method" (see
Section 2.3), a method which allows one to evaluate a determinant inductively (if the
method works).
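Although the details are deferred to Section 2.3, the idea of condensation is easy to sketch in code. The following is an illustrative sketch of my own (not taken from the paper), in Python with exact rational arithmetic; it iterates the Desnanot–Jacobi identity, replacing a matrix by its array of connected 2×2 minors and dividing entrywise by the interior of the matrix from two stages earlier. The Hilbert matrix is used for the check because it is totally positive, so no interior minor vanishes along the way.

```python
from fractions import Fraction

def dodgson_det(mat):
    """Evaluate det(mat) by Dodgson condensation.  Requires that no
    interior minor vanishes during the iteration (true, e.g., for
    totally positive matrices such as the Hilbert matrix)."""
    a = [[Fraction(x) for x in row] for row in mat]
    n = len(a)
    prev = [[Fraction(1)] * (n + 1) for _ in range(n + 1)]  # stage before the first: all ones
    while len(a) > 1:
        m = len(a)
        nxt = [[(a[i][j] * a[i + 1][j + 1] - a[i][j + 1] * a[i + 1][j])
                / prev[i + 1][j + 1]
                for j in range(m - 1)]
               for i in range(m - 1)]
        prev, a = a, nxt
    return a[0][0]

def laplace_det(mat):
    """Naive cofactor expansion along the first row, for cross-checking."""
    n = len(mat)
    if n == 1:
        return mat[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in mat[1:]]
        total += (-1) ** j * mat[0][j] * laplace_det(minor)
    return total

# Hilbert matrix H[i][j] = 1/(i+j+1): totally positive, so condensation is safe.
H = [[Fraction(1, i + j + 1) for j in range(4)] for i in range(4)]
assert dodgson_det(H) == laplace_det(H)
```

Each condensation stage is cheap, which is why the method is attractive when the determinant satisfies a recurrence in its size.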
In Section 2.4, a method which I call the "identification of factors" method is
described. This method has been extremely successful recently. It is based on a very
simple idea, which comes from one of the standard proofs of the Vandermonde
determinant evaluation (which is therefore described in Section 2.1).
The subject of Section 2.5 is a method which is based on finding one or more
differential or difference equations for the matrix of which the determinant is to be evaluated.
Section 2.6 contains a short description of George Andrews' favourite method, which
basically consists of explicitly doing the LU-factorization of the matrix of which the
determinant is to be evaluated.
The remaining subsections in this section are conceived as a complement to the
preceding. In Section 2.7 a special type of determinants is addressed, Hankel determinants.
(These are determinants of the form $\det_{1\le i,j\le n}(a_{i+j})$, and are sometimes also called
persymmetric or Turánian determinants.) As is explained there, you should expect that a
Hankel determinant evaluation is to be found in the domain of orthogonal polynomials
and continued fractions. Eventually, in Section 2.8 a few further, possibly useful results
are exhibited.
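A classical illustration of the kind of evaluation meant here (a standard fact, not one of this paper's results): the Hankel determinant of Catalan numbers, $\det_{0\le i,j\le n-1}(C_{i+j})$, equals 1 for every $n$. A quick exact check of my own in Python:

```python
from fractions import Fraction

def exact_det(mat):
    """Determinant by fraction-preserving Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return d

# Catalan numbers via the recurrence C_{k+1} = C_k * 2(2k+1)/(k+2).
cat = [1]
for k in range(10):
    cat.append(cat[-1] * 2 * (2 * k + 1) // (k + 2))

for n in range(1, 6):
    hankel = [[cat[i + j] for j in range(n)] for i in range(n)]
    assert exact_det(hankel) == 1
```

That all these determinants collapse to 1 reflects exactly the orthogonal-polynomial/continued-fraction structure alluded to above.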
Before we finally move into the subject, it must be pointed out that the methods
of determinant evaluation as presented here are ordered according to the conditions a
determinant must satisfy so that the method can be applied to it, from "stringent" to
"less stringent". I.e., first come the methods which require that the matrix of which
the determinant is to be taken satisfies a lot of conditions (usually: it contains a lot of
parameters, at least implicitly), and at the end comes the method (LU-factorization)
which requires nothing. In fact, this order (of methods) is also the order in which I
recommend that you try them on your determinant. That is, what I suggest is (and
this is the rule I follow):
(0) First try some simple-minded things (row and column operations, Laplace expansion).
Do not waste too much time. If you encounter a Hankel determinant then
see Section 2.7.
(1) If that fails, check whether your determinant is a special case of one of the general
determinants in Sections 2.2 (and 2.1).
(2) If that fails, see if the condensation method (see Section 2.3) works. (If necessary,
try to introduce more parameters into your determinant.)
(3) If that fails, try the "identification of factors" method (see Section 2.4). Alternatively,
and in particular if the matrix of which you want to find the determinant
is the matrix defining a system of differential or difference equations, try the
differential/difference equation method of Section 2.5. (If necessary, try to introduce
a parameter into your determinant.)
(4) If that fails, try to work out the LU-factorization of your determinant (see Section 2.6).
(5) If all that fails, then we are really in trouble. Perhaps you have to put more effort
into determinant manipulations (see suggestion (0))? Sometimes it is worthwhile
to interpret the matrix whose determinant you want to know as a linear map and
try to find a basis on which this map acts triangularly, or even diagonally (this
requires that the eigenvalues of the matrix are "nice"; see [47, 48, 84, 93, 192] for
examples where that worked). Otherwise, maybe something from Sections 2.8 or
3 helps?
A final remark: It was indicated that some of the methods require that your
determinant contain (more or less) parameters. Therefore it is always a good idea to:

Introduce more parameters into your determinant!

(We address this in more detail in the last paragraph of Section 2.1.) The more parameters
you can play with, the more likely it is that you will be able to carry out the determinant
evaluation. (Just to mention a few examples: The condensation method needs, at least,
two parameters. The "identification of factors" method needs, at least, one parameter,
as does the differential/difference equation method in Section 2.5.)
2.1. A few standard determinants. Let us begin with a short proof of the
Vandermonde determinant evaluation
$$\det_{1\le i,j\le n}\big(X_i^{j-1}\big)=\prod_{1\le i<j\le n}(X_j-X_i). \tag{2.1}$$

Although the following proof is well-known, it still makes sense to quickly go through
it because, by extracting the essence of it, we will be able to build a very powerful
method out of it (see Section 2.4).
If $X_{i_1}=X_{i_2}$ with $i_1\ne i_2$, then the Vandermonde determinant (2.1) certainly vanishes,
because in that case two rows of the determinant are identical. Hence, $(X_{i_1}-X_{i_2})$
divides the determinant as a polynomial in the $X_i$'s. But that means that the complete
product $\prod_{1\le i<j\le n}(X_j-X_i)$ (which is exactly the right-hand side of (2.1)) must divide
the determinant.
On the other hand, the determinant is a polynomial in the $X_i$'s of degree at most
$\binom{n}{2}$. Combined with the previous observation, this implies that the determinant equals
the right-hand side product times, possibly, some constant. To compute the constant,
compare coefficients of $X_1^0X_2^1\cdots X_n^{n-1}$ on both sides of (2.1). This completes the proof
of (2.1).
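Identities of this kind are easy to test by machine at concrete points. Here is a small sanity check of (2.1), a Python sketch of my own using exact rational arithmetic (the points 2, 3, 5, -7 are arbitrary):

```python
from fractions import Fraction
from math import prod

def exact_det(mat):
    """Determinant by fraction-preserving Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return d

xs = [Fraction(v) for v in (2, 3, 5, -7)]
n = len(xs)
vdm = [[x ** j for j in range(n)] for x in xs]   # entry X_i^(j-1)
rhs = prod(xs[j] - xs[i] for i in range(n) for j in range(i + 1, n))
assert exact_det(vdm) == rhs
```

Such spot checks prove nothing, of course, but they catch sign and index errors instantly when one is reconstructing or guessing a determinant formula.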
At this point, let us extract the essence of this proof as we will come back to it in
Section 2.4. The basic steps are:
1. Identification of factors
2. Determination of degree bound
3. Computation of the multiplicative constant.
An immediate generalization of the Vandermonde determinant evaluation is given by
the proposition below. It can be proved in just the same way as the above proof of the
Vandermonde determinant evaluation itself.
Proposition 1. Let $X_1,X_2,\dots,X_n$ be indeterminates. If $p_1,p_2,\dots,p_n$ are polynomials
of the form $p_j(x)=a_jx^{j-1}+\text{lower terms}$, then
$$\det_{1\le i,j\le n}\big(p_j(X_i)\big)=a_1a_2\cdots a_n\prod_{1\le i<j\le n}(X_j-X_i). \tag{2.2}$$
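As an illustration, (2.2) can be checked at concrete points for a concrete choice of polynomials; the polynomials and evaluation points below are my own, chosen only so that the leading coefficients $a_j$ are visible:

```python
from fractions import Fraction
from math import prod

def exact_det(mat):
    """Determinant by fraction-preserving Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return d

# p_1(x) = 3, p_2(x) = 2x - 1, p_3(x) = 5x^2 + x + 4, so a_1 a_2 a_3 = 30.
# Coefficient lists run from the constant term up to the leading term.
polys = [[3], [-1, 2], [4, 1, 5]]
xs = [Fraction(1), Fraction(4), Fraction(-2)]
mat = [[sum(c * x ** k for k, c in enumerate(p)) for p in polys] for x in xs]
rhs = 30 * prod(xs[j] - xs[i] for i in range(3) for j in range(i + 1, 3))
assert exact_det(mat) == rhs
```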

The following variations of the Vandermonde determinant evaluation are equally easy
to prove.
Lemma 2. The following identities hold true:
$$\det_{1\le i,j\le n}\big(X_i^{j}-X_i^{-j}\big)=(X_1\cdots X_n)^{-n}\prod_{1\le i<j\le n}(X_i-X_j)(1-X_iX_j)\prod_{i=1}^{n}(X_i^2-1), \tag{2.3}$$
$$\det_{1\le i,j\le n}\big(X_i^{j-1/2}-X_i^{-j+1/2}\big)=(X_1\cdots X_n)^{-n+1/2}\prod_{1\le i<j\le n}(X_i-X_j)(1-X_iX_j)\prod_{i=1}^{n}(X_i-1), \tag{2.4}$$
$$\det_{1\le i,j\le n}\big(X_i^{j-1}+X_i^{-j+1}\big)=2\,(X_1\cdots X_n)^{-n+1}\prod_{1\le i<j\le n}(X_i-X_j)(1-X_iX_j), \tag{2.5}$$
$$\det_{1\le i,j\le n}\big(X_i^{j-1/2}+X_i^{-j+1/2}\big)=(X_1\cdots X_n)^{-n+1/2}\prod_{1\le i<j\le n}(X_i-X_j)(1-X_iX_j)\prod_{i=1}^{n}(X_i+1). \tag{2.6}$$

We remark that the evaluations (2.3), (2.4), (2.5) are basically the Weyl denominator
factorizations of types C, B, D, respectively (cf. [52, Lemma 24.3, Ex. A.52, Ex. A.62,
Ex. A.66]). For that reason they may be called the “symplectic”, the “odd orthogonal”,
and the “even orthogonal” Vandermonde determinant evaluation, respectively.
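Since (2.4) and (2.6) involve half-integer powers, a convenient way to test all four identities exactly is to take each $X_i$ to be the square of a rational number, so that $X_i^{1/2}$ is again rational. The following check (my own sketch, exact arithmetic, $n=3$) verifies (2.3) through (2.6) at one set of points:

```python
from fractions import Fraction
from math import prod

def exact_det(mat):
    """Determinant by fraction-preserving Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return d

s = [Fraction(2), Fraction(3), Fraction(5)]   # X_i = s_i^2, so X_i^(1/2) = s_i
X = [t * t for t in s]
n = len(X)
pair = prod((X[i] - X[j]) * (1 - X[i] * X[j])
            for i in range(n) for j in range(i + 1, n))

# (2.3): entries X^j - X^(-j)
lhs = exact_det([[x ** j - x ** (-j) for j in range(1, n + 1)] for x in X])
assert lhs == prod(X) ** (-n) * pair * prod(x * x - 1 for x in X)
# (2.4): X^(j-1/2) = s^(2j-1)
lhs = exact_det([[t ** (2 * j - 1) - t ** (1 - 2 * j) for j in range(1, n + 1)] for t in s])
assert lhs == prod(s) ** (1 - 2 * n) * pair * prod(x - 1 for x in X)
# (2.5): entries X^(j-1) + X^(1-j)
lhs = exact_det([[x ** (j - 1) + x ** (1 - j) for j in range(1, n + 1)] for x in X])
assert lhs == 2 * prod(X) ** (1 - n) * pair
# (2.6): entries X^(j-1/2) + X^(1/2-j)
lhs = exact_det([[t ** (2 * j - 1) + t ** (1 - 2 * j) for j in range(1, n + 1)] for t in s])
assert lhs == prod(s) ** (1 - 2 * n) * pair * prod(x + 1 for x in X)
```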
If you encounter generalizations of such determinants of the form $\det_{1\le i,j\le n}(x_i^{\lambda_j})$
or $\det_{1\le i,j\le n}(x_i^{\lambda_j}-x_i^{-\lambda_j})$, etc., then you should be aware that what you encounter is
basically Schur functions, characters for the symplectic groups, or characters for the
orthogonal groups (consult [52, 105, 137] for more information on these matters; see
in particular [105, Ch. I, (3.1)], [52, p. 403, (A.4)], [52, (24.18)], [52, (24.40) + first
paragraph on p. 411], [137, Appendix A2], [52, (24.28)]). In this context, one has to
also mention Okada's general results on evaluations of determinants and Pfaffians (see
Section 2.8 for a definition) in [124, Sec. 4] and [125, Sec. 5].
Another standard determinant evaluation is the evaluation of Cauchy's double
alternant (see [119, vol. III, p. 311]),
$$\det_{1\le i,j\le n}\left(\frac{1}{X_i+Y_j}\right)=\frac{\prod_{1\le i<j\le n}(X_i-X_j)(Y_i-Y_j)}{\prod_{1\le i,j\le n}(X_i+Y_j)}. \tag{2.7}$$

Once you have seen the above proof of the Vandermonde determinant evaluation, you
will immediately know how to prove this determinant evaluation.
On setting $X_i=i$ and $Y_i=i$, $i=1,2,\dots,n$, in (2.7), we obtain the evaluation of our
first determinant in the Introduction, (1.1). For the evaluation of a mixture of Cauchy's
double alternant and Vandermonde's determinant see [15, Lemma 2].
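Both (2.7) and its specialization (1.1) are easy to verify exactly for small $n$; here is a quick Python check of my own (exact rational arithmetic, sample points chosen arbitrarily):

```python
from fractions import Fraction
from math import prod

def exact_det(mat):
    """Determinant by fraction-preserving Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return d

def cauchy_rhs(X, Y):
    """Right-hand side of (2.7)."""
    n = len(X)
    num = prod((X[i] - X[j]) * (Y[i] - Y[j])
               for i in range(n) for j in range(i + 1, n))
    den = prod(X[i] + Y[j] for i in range(n) for j in range(n))
    return num / den

X = [Fraction(1), Fraction(3), Fraction(8)]
Y = [Fraction(2), Fraction(5), Fraction(7)]
assert exact_det([[1 / (x + y) for y in Y] for x in X]) == cauchy_rhs(X, Y)

# Specialization X_i = Y_i = i gives the first determinant (1.1):
n = 4
I = [Fraction(i) for i in range(1, n + 1)]
assert exact_det([[1 / (i + j) for j in I] for i in I]) == cauchy_rhs(I, I)
```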

Whether or not you tried to evaluate (1.1) directly, here is an important lesson to be
learned (it was already mentioned earlier): To evaluate (1.1) directly is quite difficult,
whereas proving its generalization (2.7) is almost completely trivial. Therefore, it is
always a good idea to try to introduce more parameters into your determinant. (That is,
in a way such that the more general determinant still evaluates nicely.) More parameters
mean that you have more objects at your disposal to play with.
The most stupid way to introduce parameters is to just write $X_i$ instead of the row
index $i$, or write $Y_j$ instead of the column index $j$. For the determinant (1.1) even
both simultaneously was possible. For the determinant (1.2) either of the two (but not
both) would work. On the contrary, there seems to be no nontrivial way to introduce
more parameters into the determinant (1.4). This is an indication that the evaluation of
this determinant is in a different category of difficulty of evaluation. (Also (1.3) belongs
to this "different category". It is possible to introduce one more parameter, see (3.32),
but it does not seem to be possible to introduce more.)
2.2. A general determinant lemma, plus variations and generalizations.
In this section I present an apparently not so well-known determinant evaluation that
generalizes Vandermonde's determinant, and some companions. As Lascoux pointed
out to me, most of these determinant evaluations can be derived from the evaluation
of a certain determinant of minors of a given matrix due to Turnbull [179, p. 505], see
Appendix B. However, this (these) determinant evaluation(s) deserve(s) to be better
known. Apart from the fact that there are numerous applications of it (them) which I
am aware of, my experience is that I very often meet people who stumble across a special
case of this (these) determinant evaluation(s), and then have a hard time actually
doing the evaluation because, usually, their special case does not show the hidden general
structure which is lurking behind it. On the other hand, as I will demonstrate in a
moment, if you know this (these) determinant evaluation(s) then it is a matter completely
mechanical in nature to see whether it (they) is (are) applicable to your determinant
or not. If one of them is applicable, you are immediately done.
The determinant evaluation of which I am talking is the determinant lemma from
[85, Lemma 2.2] given below. Here, and in the following, empty products (like $(X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})$ for $j=n$) equal 1 by convention.

Lemma 3. Let $X_1,\dots,X_n$, $A_2,\dots,A_n$, and $B_2,\dots,B_n$ be indeterminates. Then there holds
$$\det_{1\le i,j\le n}\big((X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})\,(X_i+B_j)(X_i+B_{j-1})\cdots(X_i+B_2)\big)
=\prod_{1\le i<j\le n}(X_i-X_j)\prod_{2\le i\le j\le n}(B_i-A_j). \tag{2.8}$$

Other common examples of introducing more parameters are: Given that the $(i,j)$-entry of your
determinant is a binomial such as $\binom{i+j}{2i-j}$, try $\binom{x+i+j}{2i-j}$ (that works; see (3.30)), or even
$\binom{x+y+i+j}{y+2i-j}$ (that does not work; but see (1.2)), or $\binom{x+i+j}{2i-j}+\binom{y+i+j}{2i-j}$ (that works; see (3.32), and consult Lemma 19
and the remarks thereafter). However, sometimes parameters have to be introduced in an unexpected
way, see (3.49). (The parameter $x$ was introduced into a determinant of Bombieri, Hunt and van der
Poorten, which is obtained by setting $x=0$ in (3.49).)

Once you have guessed such a formula, it is easily proved. In the proof in [85] the
determinant is reduced to a determinant of the form (2.2) by suitable column operations.
Another proof, discovered by Amdeberhan (private communication), is by condensation,
see Section 2.3. For a derivation from the above-mentioned evaluation of a determinant
of minors of a given matrix, due to Turnbull, see Appendix B.
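A quick numerical confirmation of (2.8) may also be reassuring; here is a check of my own for $n=3$ in Python (exact arithmetic, with concrete values of my choosing for the $X$'s, $A$'s and $B$'s):

```python
from fractions import Fraction
from math import prod

def exact_det(mat):
    """Determinant by fraction-preserving Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    d = Fraction(sign)
    for i in range(n):
        d *= m[i][i]
    return d

n = 3
X = {1: Fraction(2), 2: Fraction(5), 3: Fraction(11)}
A = {2: Fraction(1), 3: Fraction(4)}
B = {2: Fraction(3), 3: Fraction(9)}

def entry(i, j):
    # (X_i + A_n)...(X_i + A_{j+1}) * (X_i + B_j)...(X_i + B_2);
    # empty products equal 1 by convention.
    return (prod(X[i] + A[k] for k in range(j + 1, n + 1))
            * prod(X[i] + B[k] for k in range(2, j + 1)))

lhs = exact_det([[entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)])
rhs = (prod(X[i] - X[j] for i in range(1, n + 1) for j in range(i + 1, n + 1))
       * prod(B[i] - A[j] for i in range(2, n + 1) for j in range(i, n + 1)))
assert lhs == rhs
```

Note how much structure the check exercises for free: the $A$'s only ever enter paired against the $B$'s on the right-hand side, exactly as (2.8) asserts.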
Now let us see what the value of this formula is, by checking if it is of any use in the
