case of the second determinant in the Introduction, (1.2). The recipe that you should
follow is:
1. Take as many factors out of rows and/or columns of your determinant, so that all
denominators are cleared.
2. Compare your result with the determinant in (2.8). If it matches, you have found
the evaluation of your determinant.
Okay, let us do so:
$$
\det_{1\le i,j\le n}\left(\binom{a+b}{a-i+j}\right)
=\prod_{i=1}^{n}\frac{(a+b)!}{(a-i+n)!\,(b+i-1)!}
\;\det_{1\le i,j\le n}\Big((a-i+n)(a-i+n-1)\cdots(a-i+j+1)\cdot(b+i-j+1)(b+i-j+2)\cdots(b+i-1)\Big)
$$
$$
=(-1)^{\binom{n}{2}}\prod_{i=1}^{n}\frac{(a+b)!}{(a-i+n)!\,(b+i-1)!}
\;\det_{1\le i,j\le n}\Big((i-a-n)(i-a-n+1)\cdots(i-a-j-1)\cdot(i+b-j+1)(i+b-j+2)\cdots(i+b-1)\Big).
$$

Now compare with the determinant in (2.8). Indeed, the determinant in the last line is
just the special case $X_i = i$, $A_j = -a-j$, $B_j = b-j+1$. Thus, by (2.8), we have a
result immediately. A particularly attractive way to write it is displayed in (2.17).
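
If you want to convince yourself on the computer before (or instead of) going through the above manipulations, a two-line Mathematica check will do. This is my own illustration, not part of the original text; the function name and the sample values of $n$, $a$, $b$ are arbitrary. It compares the determinant directly with the product formula displayed in (2.17):

(* compare det(binom(a+b, a-i+j)) with the product formula of (2.17) *)
checkBinomialDet[n_, a_, b_] := Module[{lhs, rhs},
  lhs = Det[Table[Binomial[a + b, a - i + j], {i, 1, n}, {j, 1, n}]];
  rhs = Product[(i + j + k - 1)/(i + j + k - 2), {i, 1, n}, {j, 1, a}, {k, 1, b}];
  lhs == rhs]

checkBinomialDet[4, 5, 3]   (* -> True *)
checkBinomialDet[6, 7, 2]   (* -> True *)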
Applications of Lemma 3 are abundant, see Theorem 26 and the remarks accompanying it.
In [87, Lemma 7], a determinant evaluation is given which is closely related to
Lemma 3. It was used there to establish enumeration results about shifted plane
partitions of trapezoidal shape. It is the first result in the lemma below. It is "tailored"
for use in the context of q-enumeration. For plain enumeration, one would use the
second result. This is a limit case of the first (replace $X_i$ by $q^{X_i}$, $A_j$ by $-q^{-A_j}$ and $C$
by $q^{C}$ in (2.9), divide both sides by $(1-q)^{n(n-1)}$, and then let $q\to 1$).
Lemma 4. Let $X_1, X_2, \ldots, X_n$, $A_2, \ldots, A_n$ be indeterminates. Then there hold
$$
\det_{1\le i,j\le n}\Big((C/X_i+A_n)(C/X_i+A_{n-1})\cdots(C/X_i+A_{j+1})\cdot(X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})\Big)
=\prod_{i=2}^{n}A_i^{\,i-1}\prod_{1\le i<j\le n}(X_i-X_j)(1-C/X_iX_j),
\tag{2.9}
$$

and
$$
\det_{1\le i,j\le n}\Big((X_i-A_n-C)(X_i-A_{n-1}-C)\cdots(X_i-A_{j+1}-C)\cdot(X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})\Big)
=\prod_{1\le i<j\le n}(X_j-X_i)(C-X_i-X_j).
\tag{2.10}
$$




(Both evaluations are in fact special cases in disguise of (2.2). Indeed, the $(i,j)$-entry
of the determinant in (2.9) is a polynomial in $X_i+C/X_i$, while the $(i,j)$-entry of the
determinant in (2.10) is a polynomial in $X_i-C/2$, both of degree $n-j$.)
The standard application of Lemma 4 is given in Theorem 27.
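
Before applying Lemma 4, one may want to convince oneself that nothing got garbled in copying it. The following Mathematica spot-check is mine (not from the paper): it substitutes random integers for the indeterminates (capital C is protected in Mathematica, so the value of $C$ is called cc here) and tests (2.9) and (2.10).

(* numerical spot-check of (2.9) and (2.10); A[[1]] is a dummy entry, never used *)
n = 4;
X  = RandomInteger[{2, 30}, n];
A  = Prepend[RandomInteger[{2, 30}, n - 1], 0];
cc = RandomInteger[{2, 30}];

lhs29 = Det[Table[Product[(cc/X[[i]] + A[[k]]) (X[[i]] + A[[k]]), {k, j + 1, n}],
                  {i, 1, n}, {j, 1, n}]];
rhs29 = Product[A[[i]]^(i - 1), {i, 2, n}] *
        Product[(X[[i]] - X[[j]]) (1 - cc/(X[[i]] X[[j]])), {i, 1, n}, {j, i + 1, n}];

lhs210 = Det[Table[Product[(X[[i]] - A[[k]] - cc) (X[[i]] + A[[k]]), {k, j + 1, n}],
                   {i, 1, n}, {j, 1, n}]];
rhs210 = Product[(X[[j]] - X[[i]]) (cc - X[[i]] - X[[j]]), {i, 1, n}, {j, i + 1, n}];

{lhs29 == rhs29, lhs210 == rhs210}   (* -> {True, True} *)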
In [88, Lemma 34], a common generalization of Lemmas 3 and 4 was given. In order
to have a convenient statement of this determinant evaluation, we define the degree
of a Laurent polynomial $p(X)=\sum_{i=M}^{N}a_iX^i$, $M,N\in\mathbb{Z}$, $a_i\in\mathbb{R}$ and $a_N\neq 0$, to be
$\deg p := N$.
Lemma 5. Let $X_1, X_2, \ldots, X_n$, $A_2, A_3, \ldots, A_n$, $C$ be indeterminates. If $p_0, p_1, \ldots, p_{n-1}$
are Laurent polynomials with $\deg p_j \le j$ and $p_j(C/X) = p_j(X)$ for $j = 0, 1, \ldots, n-1$,
then
$$
\det_{1\le i,j\le n}\Big((X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})\cdot(C/X_i+A_n)(C/X_i+A_{n-1})\cdots(C/X_i+A_{j+1})\cdot p_{j-1}(X_i)\Big)
=\prod_{1\le i<j\le n}(X_i-X_j)(1-C/X_iX_j)\prod_{i=1}^{n}A_i^{\,i-1}\prod_{i=1}^{n}p_{i-1}(-A_i).
\tag{2.11}
$$




Section 3 contains several determinant evaluations which are implied by the above
determinant lemma, see Theorems 28, 30 and 31.
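
Here, too, a quick numerical experiment may be reassuring. The following sketch is mine (not from the paper); it uses the particular self-reciprocal Laurent polynomials $p_j(X)=(X+C/X)^j$, which satisfy $\deg p_j=j$ and $p_j(C/X)=p_j(X)$, and random integer values for the remaining indeterminates.

(* spot-check of (2.11); since p_0 is the constant 1, its factor is omitted below *)
n = 4;
X  = RandomInteger[{2, 30}, n];
A  = Prepend[RandomInteger[{2, 30}, n - 1], 0];   (* A[[1]] is a dummy *)
cc = RandomInteger[{2, 30}];
p[j_, x_] := (x + cc/x)^j;

lhs211 = Det[Table[Product[(X[[i]] + A[[k]]) (cc/X[[i]] + A[[k]]), {k, j + 1, n}] *
                   p[j - 1, X[[i]]], {i, 1, n}, {j, 1, n}]];
rhs211 = Product[(X[[i]] - X[[j]]) (1 - cc/(X[[i]] X[[j]])), {i, 1, n}, {j, i + 1, n}] *
         Product[A[[i]]^(i - 1), {i, 2, n}] * Product[p[i - 1, -A[[i]]], {i, 2, n}];

lhs211 == rhs211   (* -> True *)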
Lemma 3 does indeed come out of the above Lemma 5 by setting $C = 0$ and
$$p_j(X)=\prod_{k=1}^{j}(B_{k+1}+X).$$
Obviously, Lemma 4 is the special case $p_j\equiv 1$, $j=0,1,\ldots,n-1$. It is in fact worth
stating the $C = 0$ case of Lemma 5 separately.
Lemma 6. Let $X_1, X_2, \ldots, X_n$, $A_2, A_3, \ldots, A_n$ be indeterminates. If $p_0, p_1, \ldots, p_{n-1}$ are
polynomials with $\deg p_j \le j$ for $j = 0, 1, \ldots, n-1$, then
$$
\det_{1\le i,j\le n}\Big((X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})\cdot p_{j-1}(X_i)\Big)
=\prod_{1\le i<j\le n}(X_i-X_j)\prod_{i=1}^{n}p_{i-1}(-A_i).
\tag{2.12}
$$

Again, Lemma 5 is tailored for applications in q-enumeration. So, also here, it may
be convenient to state the corresponding limit case that is suitable for plain enumeration
(and perhaps other applications).

Lemma 7. Let $X_1, X_2, \ldots, X_n$, $A_2, A_3, \ldots, A_n$, $C$ be indeterminates. If $p_0, p_1, \ldots,
p_{n-1}$ are polynomials with $\deg p_j \le 2j$ and $p_j(C-X) = p_j(X)$ for $j = 0, 1, \ldots, n-1$,
then
$$
\det_{1\le i,j\le n}\Big((X_i+A_n)(X_i+A_{n-1})\cdots(X_i+A_{j+1})\cdot(X_i-A_n-C)(X_i-A_{n-1}-C)\cdots(X_i-A_{j+1}-C)\cdot p_{j-1}(X_i)\Big)
=\prod_{1\le i<j\le n}(X_j-X_i)(C-X_i-X_j)\prod_{i=1}^{n}p_{i-1}(-A_i).
\tag{2.13}
$$




In concluding, I want to mention that, for more than ten years now, I have had a
different common generalization of Lemmas 3 and 4 (with some overlap with Lemma 5)
in my drawer, without ever having found use for it. Let us nevertheless state it here;
maybe it is exactly the key to the solution of a problem of yours.

Lemma 8. Let $X_1, \ldots, X_n$, $A_2, \ldots, A_n$, $B_2, \ldots, B_n$, $a_2, \ldots, a_n$, $b_2, \ldots, b_n$, and $C$ be in-
determinates. Then there holds
$$
\det_{1\le i,j\le n}\left(\begin{cases}
(X_i+A_n)\cdots(X_i+A_{j+1})\,(C/X_i+A_n)\cdots(C/X_i+A_{j+1})&\\
\quad\cdot\,(X_i+B_j)\cdots(X_i+B_2)\,(C/X_i+B_j)\cdots(C/X_i+B_2),&j<m\\[1ex]
(X_i+a_n)\cdots(X_i+a_{j+1})\,(C/X_i+a_n)\cdots(C/X_i+a_{j+1})&\\
\quad\cdot\,(X_i+b_j)\cdots(X_i+b_2)\,(C/X_i+b_j)\cdots(C/X_i+b_2),&j\ge m
\end{cases}\right)
$$
$$
=\prod_{1\le i<j\le n}(X_i-X_j)(1-C/X_iX_j)\prod_{2\le i\le j\le m-1}(B_i-A_j)(1-C/B_iA_j)
$$
$$
\times\prod_{i=2}^{m}\prod_{j=m}^{n}(b_i-A_j)(1-C/b_iA_j)\prod_{m+1\le i\le j\le n}(b_i-a_j)(1-C/b_ia_j)
$$
$$
\times\prod_{i=2}^{m}(A_i\cdots A_n)\prod_{i=m+1}^{n}(a_i\cdots a_n)\prod_{i=2}^{m-1}(B_2\cdots B_i)\prod_{i=m}^{n}(b_2\cdots b_i).
\tag{2.14}
$$
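
Since (2.14) is easy to mistype, here is a numerical plausibility check (my own, not from the paper); it instantiates the statement for $n=4$, $m=3$ with random integer values, written with exactly the index ranges printed above.

(* spot-check of (2.14); the first entries of AA, BB, aa, bb are dummies, never used *)
n = 4; m = 3;
X  = RandomInteger[{2, 30}, n];  cc = RandomInteger[{2, 30}];
AA = Prepend[RandomInteger[{2, 30}, n - 1], 0];
BB = Prepend[RandomInteger[{2, 30}, n - 1], 0];
aa = Prepend[RandomInteger[{2, 30}, n - 1], 0];
bb = Prepend[RandomInteger[{2, 30}, n - 1], 0];

entry[i_, j_] := If[j < m,
  Product[(X[[i]] + AA[[k]]) (cc/X[[i]] + AA[[k]]), {k, j + 1, n}] *
  Product[(X[[i]] + BB[[k]]) (cc/X[[i]] + BB[[k]]), {k, 2, j}],
  Product[(X[[i]] + aa[[k]]) (cc/X[[i]] + aa[[k]]), {k, j + 1, n}] *
  Product[(X[[i]] + bb[[k]]) (cc/X[[i]] + bb[[k]]), {k, 2, j}]];

lhs214 = Det[Table[entry[i, j], {i, 1, n}, {j, 1, n}]];
rhs214 = Product[(X[[i]] - X[[j]]) (1 - cc/(X[[i]] X[[j]])), {i, 1, n}, {j, i + 1, n}] *
  Product[(BB[[i]] - AA[[j]]) (1 - cc/(BB[[i]] AA[[j]])), {i, 2, m - 1}, {j, i, m - 1}] *
  Product[(bb[[i]] - AA[[j]]) (1 - cc/(bb[[i]] AA[[j]])), {i, 2, m}, {j, m, n}] *
  Product[(bb[[i]] - aa[[j]]) (1 - cc/(bb[[i]] aa[[j]])), {i, m + 1, n}, {j, i, n}] *
  Product[Product[AA[[k]], {k, i, n}], {i, 2, m}] *
  Product[Product[aa[[k]], {k, i, n}], {i, m + 1, n}] *
  Product[Product[BB[[k]], {k, 2, i}], {i, 2, m - 1}] *
  Product[Product[bb[[k]], {k, 2, i}], {i, m, n}];

lhs214 == rhs214   (* should evaluate to True *)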




The limit case which goes with this determinant lemma is the following. (There is
some overlap with Lemma 7.)

Lemma 9. Let $X_1, \ldots, X_n$, $A_2, \ldots, A_n$, $B_2, \ldots, B_n$, $a_2, \ldots, a_n$, $b_2, \ldots, b_n$, and $C$ be
indeterminates. Then there holds
$$
\det_{1\le i,j\le n}\left(\begin{cases}
(X_i+A_n)\cdots(X_i+A_{j+1})\,(X_i-A_n-C)\cdots(X_i-A_{j+1}-C)&\\
\quad\cdot\,(X_i+B_j)\cdots(X_i+B_2)\,(X_i-B_j-C)\cdots(X_i-B_2-C),&j<m\\[1ex]
(X_i+a_n)\cdots(X_i+a_{j+1})\,(X_i-a_n-C)\cdots(X_i-a_{j+1}-C)&\\
\quad\cdot\,(X_i+b_j)\cdots(X_i+b_2)\,(X_i-b_j-C)\cdots(X_i-b_2-C),&j\ge m
\end{cases}\right)
$$
$$
=\prod_{1\le i<j\le n}(X_i-X_j)(C-X_i-X_j)\prod_{2\le i\le j\le m-1}(B_i-A_j)(B_i+A_j+C)
$$
$$
\times\prod_{i=2}^{m}\prod_{j=m}^{n}(b_i-A_j)(b_i+A_j+C)\prod_{m+1\le i\le j\le n}(b_i-a_j)(b_i+a_j+C).
\tag{2.15}
$$



If you are looking for more determinant evaluations of such a general type, then you
may want to look at [156, Lemmas A.1 and A.11] and [158, Lemma A.1].
2.3. The condensation method. This is Doron Zeilberger's favourite method. It
allows one (sometimes) to establish an elegant, effortless inductive proof of a determinant
evaluation, in which the only task is to guess the result correctly.
The method is often attributed to Charles Ludwig Dodgson [38], better known as
Lewis Carroll. However, the identity on which it is based seems to be actually due to
P. Desnanot (see [119, vol. I, pp. 140-142]; with the first rigorous proof being probably
due to Jacobi, see [18, Ch. 4] and [79, Sec. 3]). This identity is the following.
Proposition 10. Let $A$ be an $n\times n$ matrix. Denote the submatrix of $A$ in which rows
$i_1, i_2, \ldots, i_k$ and columns $j_1, j_2, \ldots, j_k$ are omitted by $A^{j_1,j_2,\ldots,j_k}_{i_1,i_2,\ldots,i_k}$. Then there holds
$$
\det A\cdot\det A^{1,n}_{1,n}=\det A^{1}_{1}\cdot\det A^{n}_{n}-\det A^{n}_{1}\cdot\det A^{1}_{n}.
\tag{2.16}
$$
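
Proposition 10 holds for an arbitrary square matrix, so it is easy to test. The following Mathematica fragment is mine (not from the paper): the helper minor[rows, cols] deletes the listed rows and columns before taking the determinant, so the test does not depend on any sub/superscript convention.

(* check the Desnanot-Jacobi identity (2.16) on a random 5 x 5 matrix *)
n = 5;
A = RandomInteger[{-9, 9}, {n, n}];
minor[rows_, cols_] := Det[Delete[Transpose[Delete[A, List /@ rows]], List /@ cols]];

Det[A] * minor[{1, n}, {1, n}] ==
  minor[{1}, {1}] * minor[{n}, {n}] - minor[{1}, {n}] * minor[{n}, {1}]
(* -> True *)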



So, what is the point of this identity? Suppose you are given a family $(\det M_n)_{n\ge 0}$
of determinants, $M_n$ being an $n\times n$ matrix, $n = 0, 1, \ldots$. Maybe $M_n = M_n(a,b)$
is the matrix underlying the determinant in (1.2). Suppose further that you have
already worked out a conjecture for the evaluation of $\det M_n(a,b)$ (we did in fact already
evaluate this determinant in Section 2.2, but let us ignore that for the moment),
$$
\det M_n(a,b) := \det_{1\le i,j\le n}\left(\binom{a+b}{a-i+j}\right)
\overset{?}{=} \prod_{i=1}^{n}\prod_{j=1}^{a}\prod_{k=1}^{b}\frac{i+j+k-1}{i+j+k-2}.
\tag{2.17}
$$

Then you have already proved your conjecture, once you observe that
$$
M_n(a,b)^{n}_{n} = M_{n-1}(a,b),\qquad
M_n(a,b)^{1}_{1} = M_{n-1}(a,b),\qquad
M_n(a,b)^{1}_{n} = M_{n-1}(a+1,b-1),
$$
$$
M_n(a,b)^{n}_{1} = M_{n-1}(a-1,b+1),\qquad
M_n(a,b)^{1,n}_{1,n} = M_{n-2}(a,b).
\tag{2.18}
$$

For, because of (2.18), Desnanot's identity (2.16), with $A = M_n(a,b)$, gives a recurrence
which expresses $\det M_n(a,b)$ in terms of quantities of the form $\det M_{n-1}(\,.\,)$ and
$\det M_{n-2}(\,.\,)$. So, it just remains to check the conjecture (2.17) for $n = 0$ and $n = 1$, and
that the right-hand side of (2.17) satisfies the same recurrence, because that completes
a perfect induction with respect to $n$. (What we have described here is basically the
contents of [197]. For a bijective proof of Proposition 10 see [200].)
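
To see the induction "run", one can turn (2.16) and (2.18) into a little program. The following sketch is mine (not from the paper); d[n, a, b] computes $\det M_n(a,b)$ purely from the condensation recurrence and the two base cases, and is then compared with the conjectured product (2.17) for some sample values.

(* condensation recurrence for det M_n(a,b), derived from (2.16) and (2.18) *)
d[0, a_, b_] := 1;
d[1, a_, b_] := Binomial[a + b, a];
d[n_, a_, b_] := (d[n - 1, a, b]^2 -
    d[n - 1, a + 1, b - 1] d[n - 1, a - 1, b + 1]) / d[n - 2, a, b];

(* the conjectured right-hand side of (2.17) *)
rhs217[n_, a_, b_] := Product[(i + j + k - 1)/(i + j + k - 2),
                              {i, 1, n}, {j, 1, a}, {k, 1, b}];

Table[d[n, 6, 4] == rhs217[n, 6, 4], {n, 0, 6}]   (* -> {True, True, True, True, True, True, True} *)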
Amdeberhan (private communication) discovered that in fact the determinant evaluation
(2.8) itself (which we used to evaluate the determinant (1.2) for the first time) can
be proved by condensation. The reader will easily figure out the details. Furthermore,
the condensation method also proves the determinant evaluations (3.35) and (3.36).
(Also this observation is due to Amdeberhan [2].) At another place, condensation was
used by Eisenkölbl [41] in order to establish a conjecture by Propp [138, Problem 3]
about the enumeration of rhombus tilings of a hexagon where some triangles along the
border of the hexagon are missing.
The reader should observe that crucial for a successful application of the method is
the existence of (at least) two parameters (in our example these are a and b), which
help to still stay within the same family of matrices when we take minors of our original
matrix (compare (2.18)). (See the last paragraph of Section 2.1 for a few hints of how
to introduce more parameters into your determinant, in the case that you are short of
parameters.) Obviously, aside from the fact that we need at least two parameters, we
can hope for a success of condensation only if our determinant is of a special kind.
2.4. The "identification of factors" method. This is the method that I find
most convenient to work with, once you encounter a determinant that is not amenable
to an evaluation using the previous recipes. It is best to explain this method along with
an example. So, let us consider the determinant in (1.3). Here it is, together with its,
at this point, unproven evaluation,
$$
\det_{0\le i,j\le n-1}\left(\binom{\mu+i+j}{2i-j}\right)
=(-1)^{\chi(n\equiv 3\ \mathrm{mod}\ 4)}\,2^{\binom{n-1}{2}}
\prod_{i=1}^{n-1}\frac{(\mu+i+1)_{\lfloor(i+1)/2\rfloor}\,\left(-\mu-3n+i+\tfrac{3}{2}\right)_{\lfloor i/2\rfloor}}{(i)_i},
\tag{2.19}
$$

where $\chi(A)=1$ if $A$ is true and $\chi(A)=0$ otherwise, and where the shifted factorial
$(a)_k$ is defined by $(a)_k := a(a+1)\cdots(a+k-1)$, $k\ge 1$, and $(a)_0 := 1$.
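
Before embarking on the proof, it cannot hurt to test (2.19) numerically; both sides are polynomials in $\mu$, so checking a handful of integer values of $\mu$ for several $n$ already makes a typo very unlikely. The following check is mine (not from the paper):

(* numerical test of (2.19) for small n and integer mu *)
lhs219[n_, mu_] := Det[Table[Binomial[mu + i + j, 2 i - j], {i, 0, n - 1}, {j, 0, n - 1}]];
rhs219[n_, mu_] := (-1)^Boole[Mod[n, 4] == 3] * 2^Binomial[n - 1, 2] *
  Product[Pochhammer[mu + i + 1, Floor[(i + 1)/2]] *
          Pochhammer[-mu - 3 n + i + 3/2, Floor[i/2]] / Pochhammer[i, i], {i, 1, n - 1}];

Table[lhs219[n, mu] == rhs219[n, mu], {n, 1, 6}, {mu, 0, 5}]   (* -> all True *)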
As was already said in the Introduction, this determinant belongs to a different
category of difficulty of evaluation, so that nothing of what was presented so far will
immediately work on that determinant.
Nevertheless, I claim that the procedure which we chose to evaluate the Vandermonde
determinant works also with the above determinant. To wit:
1. Identification of factors
2. Determination of degree bound
3. Computation of the multiplicative constant.
You will say: 'A moment please! The reason that this procedure worked so smoothly
for the Vandermonde determinant is that there are so many (to be precise: $n$) variables
at our disposal. On the contrary, the determinant in (2.19) has exactly one (!) variable.'

Yet ” and this is the point that I want to make here ” it works, in spite of having
just one variable at our disposal!.
What we want to prove in the first step is that the right-hand side of (2.19) divides the
determinant. For example, we would like to prove that $(\mu+n)$ divides the determinant
(actually, $(\mu+n)^{\lfloor(n+1)/3\rfloor}$, we will come to that in a moment). Equivalently, if we set
$\mu = -n$ in the determinant, then it should vanish. How could we prove that? Well, if
it vanishes then there must be a linear combination of the columns, or of the rows, that
vanishes. So, let us find such a linear combination of columns or rows. Equivalently, for
$\mu = -n$ we find a vector in the kernel of the matrix in (2.19), respectively of its transpose.
More generally (and this addresses that we actually want to prove that $(\mu+n)^{\lfloor(n+1)/3\rfloor}$
divides the determinant):

For proving that $(\mu+n)^{E}$ divides the determinant, we find $E$ linearly independent
vectors in the kernel.

(For a formal justification that this does indeed suffice, see Section 2 of [91], and in
particular the Lemma in that section.)
Okay, how is this done in practice? You go to your computer, crank out these vectors
in the kernel, for n = 1, 2, 3, . . . , and try to make a guess what they are in general.
To see how this works, let us do it in our example. What the computer gives is the
following (we are using Mathematica here):
In[1]:= V[2]
Out[1]= {0, c[1]}
In[2]:= V[3]
Out[2]= {0, c[2], c[2]}
In[3]:= V[4]
Out[3]= {0, c[1], 2 c[1], c[1]}
In[4]:= V[5]
Out[4]= {0, c[1], 3 c[1], c[3], c[1]}
In[5]:= V[6]
Out[5]= {0, c[1], 4 c[1], 2 c[1] + c[4], c[4], c[1]}
In[6]:= V[7]
Out[6]= {0, c[1], 5 c[1], c[3], -10 c[1] + 2 c[3], -5 c[1] + c[3], c[1]}
In[7]:= V[8]
Out[7]= {0, c[1], 6 c[1], c[3], -25 c[1] + 3 c[3], c[5], -9 c[1] + c[3], c[1]}
In[8]:= V[9]
Out[8]= {0, c[1], 7 c[1], c[3], -49 c[1] + 4 c[3],

-28 c[1] + 2 c[3] + c[6], c[6], -14 c[1] + c[3], c[1]}
In[9]:= V[10]

Out[9]= {0, c[1], 8 c[1], c[3], -84 c[1] + 5 c[3], c[5],

196 c[1] - 10 c[3] + 2 c[5], 98 c[1] - 5 c[3] + c[5], -20 c[1] + c[3],

c[1]}
In[10]:= V[11]
Out[10]= {0, c[1], 9 c[1], c[3], -132 c[1] + 6 c[3], c[5],

648 c[1] - 25 c[3] + 3 c[5], c[7], 234 c[1] - 9 c[3] + c[5],

-27 c[1] + c[3], c[1]}

Here, V[n] is the generic vector (depending on the indeterminates c[i]) in the kernel of
the matrix in (2.19) with $\mu = -n$. For convenience, let us denote this matrix by $M_n$.
You do not have to stare at these data for long to see that, in particular,
the vector (0, 1) is in the kernel of M2 ,
the vector (0, 1, 1) is in the kernel of M3,
the vector (0, 1, 2, 1) is in the kernel of M4 ,
the vector (0, 1, 3, 3, 1) is in the kernel of M5 (set c[1] = 1 and c[3] = 3),
the vector (0, 1, 4, 6, 4, 1) is in the kernel of M6 (set c[1] = 1 and c[4] = 4), etc.
Apparently,
$$
\left(0,\binom{n-2}{0},\binom{n-2}{1},\binom{n-2}{2},\ldots,\binom{n-2}{n-2}\right)
\tag{2.20}
$$
is in the kernel of $M_n$. That was easy! But we need more linear combinations. Take
a closer look, and you will see that the pattern persists (set c[1] = 0 everywhere, etc.).
It will take you no time to work out a full-fledged conjecture for $\lfloor(n+1)/3\rfloor$ linearly
independent vectors in the kernel of $M_n$.
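
For the record, here is one way such a session could be reproduced; this is my reconstruction, not the author's actual code. The helper bin pins down the convention that a binomial coefficient with negative lower index is zero (so nothing depends on how Binomial treats negative arguments), M[n] is the matrix of (2.19) at $\mu = -n$, and V[n] is the generic element of its kernel. The last lines confirm that the guessed vector (2.20) is indeed annihilated by M[n] for small $n$.

(* reconstruction of the kernel computation; bin[m, k] = 0 whenever k < 0 *)
bin[m_, k_] := If[k < 0, 0, Binomial[m, k]];
M[n_] := Table[bin[-n + i + j, 2 i - j], {i, 0, n - 1}, {j, 0, n - 1}];
V[n_] := Module[{basis = NullSpace[M[n]]},
   Sum[c[k] basis[[k]], {k, Length[basis]}]];

V[4]                        (* a generic kernel vector; compare Out[3] above *)
Simplify[M[4] . V[4]]       (* -> {0, 0, 0, 0} *)

(* the conjectured vector (2.20) lies in the kernel of M[n]: *)
conj[n_] := Prepend[Table[Binomial[n - 2, k], {k, 0, n - 2}], 0];
Table[M[n] . conj[n] === Table[0, {n}], {n, 2, 10}]   (* -> all True *)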
Of course, there remains something to be proved. We need to actually prove that our
guessed vectors are indeed in the kernel. E.g., in order to prove that the vector (2.20)
is in the kernel, we need to verify that
$$
\sum_{j=1}^{n-1}\binom{n-2}{j-1}\binom{-n+i+j}{2i-j}=0
$$
for $i = 0, 1, \ldots, n-1$. However, verifying binomial identities is pure routine today, by
means of Zeilberger's algorithm [194, 196] (see Footnote 5 in the Introduction).
Next you perform the same game with the other factors of the right-hand side product
of (2.19). This is not much more difficult. (See Section 3 of [91] for details. There,
slightly different vectors are used.)
Thus, we would have finished the first step, "identification of factors," of our plan: We
have proved that the right-hand side of (2.19) divides the determinant as a polynomial
in $\mu$.
The second step, "determination of degree bound," consists of determining the (maximal)
degree in $\mu$ of determinant and conjectured result. As is easily seen, this is $\binom{n}{2}$
in each case.
The arguments thus far show that the determinant in (2.19) must equal the right-hand
side times, possibly, some constant. To determine this constant in the third
step, "computation of the multiplicative constant," one compares coefficients of $\mu^{\binom{n}{2}}$ on
both sides of (2.19). This is an enjoyable exercise. (Consult [91] if you do not want
to do it yourself.) Further successful applications of this procedure can be found in
[27, 30, 42, 89, 90, 92, 94, 97, 132].
Having done that, let me point out that most of the individual steps in this sort of
calculation can be done (almost) automatically. In detail, what did we do? We had to
1. Guess the result. (Indeed, without the result we could not have got started.)
2. Guess the vectors in the kernel.
3. Establish a binomial (hypergeometric) identity.
4. Determine a degree bound.
5. Compute a particular value or coefficient in order to determine the multiplicative constant.
