Next Generation Security Software




Quantum Cryptography
A study into the present technologies and future applications


Bill Grindlay
bill@ngssoftware.com
14th January 2003

©2003 Next Generation Security Software Ltd
www.ngssoftware.com
“Anyone who is not dizzy after his first acquaintance with the quantum of action has
not understood a word.”
- Niels Bohr (1885-1962)



“When cryptography is outlawed, bayl bhgynjf jvyy unir cevinpl.”
- Anonymous




Contents

Introduction
A Brief History of Cryptography
The Development of Quantum Theory
The Advent of Quantum Cryptography
Is Quantum Cryptography Secure?
Absolute Security and the Wider Social Issues
Conclusion




Introduction

In October 1989, at 3 o'clock in the morning at IBM's Thomas J. Watson Research
Centre near New York, Dr. Charles Bennett and a research student named John
Smolin witnessed the first ever quantum cryptographic exchange. Using polarised
light photons, computers called Alice and Bob successfully negotiated a completely
secure channel of communication over a distance of 32 centimetres. Quantum
cryptography had finally moved from the theoretical to the practical arena.

In this report I intend to demonstrate why many scientists now view quantum
cryptography as the first ever completely unbreakable cipher, one which will allow
people all over the world to communicate securely and privately. I shall also
consider the implications this will have for society as a whole, including potential
problems for law enforcement organisations and the possible impact on civil
liberties.




A Brief History of Cryptography

Cryptography (derived from the Greek words kryptos, meaning hidden, and
graphein, meaning to write) is the science of codes and ciphers. A cipher is
essentially a cryptographic algorithm used to convert a message, known as the
plaintext, into unreadable ciphertext. The message can then be safely transmitted
without fear of letting sensitive information fall into the hands of the enemy.

The first written evidence of cryptography dates back as far as 1900 BC, when an
Egyptian scribe recorded information using a variation of standard hieroglyphics. The
author made use of what is now known as a substitution cipher, meaning that certain
hieroglyphs were substituted for others according to a set of rules. A modern
equivalent of this would be if every letter in the plaintext represented another in the
ciphertext, for instance swap every occurrence of the letter A in a message for B, and
every B for C and so on.
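To make the idea concrete, here is a minimal sketch in Python of such a
monoalphabetic substitution (the shift-by-one alphabet is the illustrative rule from
the example above, not the scribe's actual scheme):

    import string

    # Build a substitution table mapping each letter to the next one
    # (A -> B, B -> C, ..., Z -> A), as in the example above.
    PLAIN = string.ascii_uppercase
    CIPHER = PLAIN[1:] + PLAIN[0]
    ENCRYPT_TABLE = str.maketrans(PLAIN, CIPHER)
    DECRYPT_TABLE = str.maketrans(CIPHER, PLAIN)

    def substitute(text, table):
        return text.upper().translate(table)

    ciphertext = substitute("ATTACK AT DAWN", ENCRYPT_TABLE)
    print(ciphertext)                             # BUUBDL BU EBXO
    print(substitute(ciphertext, DECRYPT_TABLE))  # ATTACK AT DAWN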

Archaeologists discovered evidence of the first practical application of cryptography
dating from 1500 BC in Mesopotamia (now part of modern-day Iraq). A potter had
encoded his formula for pottery glaze onto a clay tablet, presumably to prevent rivals
from stealing it. This is an early example of what is now one of the main non-military
uses for data encryption: the protection of intellectual property.

The first military application of cryptography came in the fifth century BC and was
invented by the Spartans of Greece. Their system involved wrapping a strip of leather
around a staff (known as a skytale), writing the message lengthways along the staff
and then removing the leather. The markings on the leather were then unintelligible
to anyone without a matching staff. In a sense the skytale could be said to be the first
cryptographic key, as the only people who could read the message were those who
possessed a staff of exactly the same diameter as the one used to encipher the
message.
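In modern terms the skytale is a columnar transposition, with the staff's
circumference fixing how many letters make up one turn of the strip. A rough
Python sketch, in which a letters-per-turn count stands in for the physical
diameter:

    def skytale_encrypt(message, letters_per_turn):
        # Writing lengthways along the staff fills a grid row by row;
        # unwinding the leather strip reads that grid column by column.
        msg = message.replace(" ", "").upper()
        while len(msg) % letters_per_turn:      # pad so the strip is full,
            msg += "X"                          # keeping the inverse simple
        return "".join(msg[i::letters_per_turn] for i in range(letters_per_turn))

    def skytale_decrypt(ciphertext, letters_per_turn):
        # Re-wrapping on a staff of matching diameter reverses the transposition.
        turns = len(ciphertext) // letters_per_turn
        return "".join(ciphertext[i::turns] for i in range(turns))

    print(skytale_encrypt("HELP ME I AM UNDER ATTACK", 4))
    # -> HMMETEEURALINACPADTK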

Then, as now, the primary motivating factor behind cryptographic development and
research has been to keep military communications from the enemy. Julius Caesar
used an early substitution cipher, which now bears his name, for communication with
his generals. The Caesar cipher used a shift of three places along the alphabet to
convert a plaintext letter to a ciphertext one, so A enciphered to D, B to E and so on.
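Expressed arithmetically, the Caesar cipher simply adds a fixed offset of three to
each letter's position, modulo 26; a short Python sketch:

    def caesar(text, shift=3):
        # Shift each letter 'shift' places along the alphabet, wrapping
        # from Z back to A; a shift of -3 decrypts.
        out = []
        for ch in text.upper():
            if ch.isalpha():
                out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            else:
                out.append(ch)
        return "".join(out)

    print(caesar("VENI VIDI VICI"))       # YHQL YLGL YLFL
    print(caesar("YHQL YLGL YLFL", -3))   # VENI VIDI VICI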

A political treatise known as The Arthashastra written around 300 BC and attributed
to Kautilya, an Indian court advisor, made the first mention of cryptanalysis
techniques. Cryptanalysis, the art of code-breaking, has developed as a science
parallel to cryptography. Advances in code-making have thrown the onus onto the
code-breakers, and their exposures of vulnerabilities have, in turn, challenged the
cryptographers to produce even better ciphers.




During the Dark Ages and Mediaeval periods, many Arab and European scholars
wrote books categorising the various types of cipher which had evolved by that point.
An Arabic encyclopaedia from 1412 defined two types of cipher known as
substitution (where every letter in the plaintext is substituted for another one in the
ciphertext) and transposition (where the position of letters in the plaintext are changed
or new letters added in order to produce the ciphertext). Italian scientist Giovanni
Porta published a book in 1563 which detailed both of these ciphers, plus another
called symbol-substitution (use of a strange alphabet).

The simple substitution cipher was by now no longer as secure as it had once been,
due to a widely-used cryptanalysis technique known as frequency analysis. Code-
breakers, when given a large enough piece of ciphertext, would tabulate the frequency
with which each letter occurred. Using this information, and bearing in mind the
frequency with which letters normally occur in standard written English (E is most
common, followed by T, then A, etc.), it is possible for the cryptanalyst to make
tentative guesses as to which letters in the ciphertext equate to which in the plaintext.
It is quickly apparent whether these guesses are accurate, and using knowledge of
other linguistic characteristics of English (Q is always followed by U, for example)
the substitution cipher can be unravelled relatively easily.
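The counting stage of the attack is easily automated; a small Python sketch (the
intercepted ciphertext here is a made-up sample, a Caesar shift of four):

    from collections import Counter

    # English letters from most to least common, for comparison.
    ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

    def letter_frequencies(ciphertext):
        # Count only the letters, most frequent first.
        return Counter(ch for ch in ciphertext.upper() if ch.isalpha()).most_common()

    intercepted = "XLI UYMGO FVSAR JSB NYQTW SZIV XLI PEDC HSK"

    # A first guess maps the most common ciphertext letter to E, the
    # next to T, and so on, to be refined by linguistic knowledge.
    guesses = {c: p for (c, _), p in zip(letter_frequencies(intercepted),
                                         ENGLISH_ORDER)}
    print(guesses)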

The response to the insecurity of the substitution cipher came in 1466 with the first
published example of a polyalphabetic cipher by the Italian renaissance artist and
scientist Leon Battista Alberti. The problem with the old monoalphabetic substitution
cipher was that one letter in the plaintext was represented by one letter in the
ciphertext, and it was the same mapping every time. The essence of the new type of
cipher was the use of several different letter-to-letter mappings (cipher alphabets)
depending on the position in the plaintext, ensuring that a given letter in the plaintext
did not always encrypt to the same letter in the ciphertext. The most famous
polyalphabetic cipher is known as the Vigenère cipher, after Blaise de Vigenère, a
French diplomat and cryptographer who developed it into its final form. The
Vigenère cipher requires a keyword, which is written above the plaintext message
repeatedly as shown:


SECRETSECRETSECRETSECRET
THEREDCOWFLIESATMIDNIGHT




Letters are enciphered as with other substitution ciphers, but the strength of the
Vigenère cipher is that, unlike the Caesar cipher where the offset is always 3 letters,
the enciphering offset changes depending on which letter of the keyword is above the
plaintext letter. This gives the cipher 26 possible enciphering alphabets, and in order
to find which one should be used for which letter a Vigenère square is used (see
Appendix I). The ciphertext letter is found by looking at the intersection between the
plaintext letter selected at the top, and the keyword letter selected along the left-hand
side. Every time the keyword letter changes, a different horizontal row of the square
is used, and therefore a different cipher alphabet is applied:


SECRETSECRETSECRETSECRET
THEREDCOWFLIESATMIDNIGHT
LLGIIWUSYWPBWWCKQBVRKXLM



The resultant ciphertext does not bear a direct one-to-one mapping to the plaintext.
For example, the three T's in the plaintext are encoded first as L, then as K and finally
as M. This makes the ciphertext impervious to standard frequency analysis.
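Since each row of the Vigenère square is simply a shifted alphabet, the table lookup
reduces to modular addition of letter positions. A Python sketch (assuming an
upper-case plaintext and keyword with no spaces) which reproduces the example
above:

    def vigenere_encrypt(plaintext, keyword):
        # Each keyword letter selects a row of the Vigenere square, i.e.
        # the shift applied to the plaintext letter beneath it.
        a = ord("A")
        return "".join(
            chr((ord(p) - a + ord(k) - a) % 26 + a)
            for p, k in zip(plaintext, keyword * len(plaintext))
        )

    print(vigenere_encrypt("THEREDCOWFLIESATMIDNIGHT", "SECRET"))
    # -> LLGIIWUSYWPBWWCKQBVRKXLM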

The Vigenère method became known as le chiffre indéchiffrable (the indecipherable
cipher), but of course it was not unbreakable for long. Charles Babbage, an English
inventor and scientist, is best known for designing the difference engine, a mechanical
calculating device which was the precursor to the modern digital computer. He was
also interested in codes and ciphers, and was the first to realise that the Vigenère
cipher was essentially a series of monoalphabetic ciphers. Babbage's technique
involved looking for repetitions of character strings in the ciphertext and assuming
that these were the same plaintext letters being enciphered with the same keyword
letters. From the spacing of the repetitions it is possible to guess the length of the
keyword (n) and thus split the ciphertext into n groups. Standard frequency analysis
can then be applied to each group of ciphertext letters to recover all n monoalphabetic
cipher alphabets.
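A sketch of the first step of Babbage's attack (now usually called the Kasiski
examination) in Python: find repeated trigrams and collect the distances between
their occurrences, whose common factors suggest the keyword length:

    from collections import defaultdict
    from functools import reduce
    from math import gcd

    def repeat_distances(ciphertext, length=3):
        # Record where every substring of the given length occurs,
        # then report the gaps between repeated occurrences.
        positions = defaultdict(list)
        for i in range(len(ciphertext) - length + 1):
            positions[ciphertext[i:i + length]].append(i)
        return [b - a
                for pos in positions.values() if len(pos) > 1
                for a, b in zip(pos, pos[1:])]

    distances = repeat_distances("LONGINTERCEPTEDCIPHERTEXT")  # placeholder
    if distances:
        # The keyword length probably divides most of these distances; the
        # ciphertext then splits into that many monoalphabetic groups.
        print(distances, reduce(gcd, distances))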

Code-making and code-breaking really came into their own during the First and
Second World Wars. The development of radio communications by Guglielmo
Marconi at the turn of the 20th century generated a huge amount of interest within the
military. However, the very feature which made it so appealing, immediate
communication over great distances, also meant that the enemy could easily intercept
vast quantities of information. There was clearly a great need for secure encryption,
something which had not existed since the successful breaking of the Vigenère cipher.




It seemed that for the first time in history the cryptanalysts had comprehensively
gained the upper hand. This was made all the more apparent by the 1917
decipherment by British intelligence of what became known as the Zimmermann
telegram (see Appendix II), which effectively brought America into the First World
War. The telegram was sent by the German Foreign Minister Arthur Zimmermann to
the German embassy in Mexico, and promised US territory to Mexico if it allied with
Germany. When its contents were made public, American popular opinion, previously
fairly neutral, became decidedly pro-war, and the US declared war on Germany
within a month. This event is still held by many experts to be the single most
significant decipherment ever made.

In the same year as the Zimmermann telegram made code-breaking history, Gilbert
Vernam, an employee of AT&T in America, built a machine to generate pseudo-
random strings of characters for use as encryption keys in a cipher called the one-time
pad (also referred to as the Vernam cipher). Before the advent of quantum
cryptography the one-time pad was widely regarded as the only cryptosystem which
guaranteed absolute security, and it is often termed the 'holy grail' of cryptography.
The one-time pad uses a random key of the same length as the plaintext, so if the
message is intercepted the cryptanalyst has absolutely nothing to give them a
foothold. The random encryption key means that there is no point looking for
repeating patterns; essentially the only information an attacker has is that if the
ciphertext is n letters long, the plaintext must be one of all the possible messages of
length n.
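In modern binary form the one-time pad is a simple XOR of message and key bytes;
a minimal Python sketch (the key must be truly random, as long as the message,
kept secret, and never reused):

    import secrets

    def one_time_pad(data, key):
        # XOR each byte with the corresponding key byte; because XOR is
        # its own inverse, the same function encrypts and decrypts.
        assert len(key) == len(data)
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"THE RED COW FLIES AT MIDNIGHT"
    key = secrets.token_bytes(len(message))     # the shared random pad
    ciphertext = one_time_pad(message, key)
    assert one_time_pad(ciphertext, key) == message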

One-time pads are currently used only when security is of the utmost importance, for
example to encrypt the telephone line between Downing Street and the White House;
this is because of the problem of key distribution. For a one-time pad cryptosystem to
work effectively, both the receiver and the sender need identical strings of random
characters to use as encryption keys. New keys must be physically exchanged on a
regular basis to avoid key reuse, a very time-consuming operation if the
correspondents are geographically distant.

Key distribution was a catch-22 situation for all ciphers regardless of security: if two
people wanted to communicate secretly they had to share a secret key, which itself
could not be exchanged unless they could communicate secretly. After the Second
World War, with the introduction of increasingly complex ciphers made possible by
the digital computer, key distribution began to be seen as the weakest link in the
chain. In the 1970s the American agency COMSEC, which managed encryption keys
for the US government, transported tonnes of paper, cards and floppy disks under
armed guard. If the US government was spending this amount of resources and
money on ensuring secrecy, many people reasoned, what chance did civilians ever
have of privacy?




The cryptography breakthrough[1] was made in 1975 by Whitfield Diffie, a research
cryptographer and mathematician, who envisaged what is now known as public key
cryptography. He developed an algorithm with a colleague, Martin Hellman, which
allowed two people who might never have met to negotiate a secret key over a public
communications channel. This was not a complete solution, however, as it still
required both parties to be online at the same time. There was also no way for both
parties to be sure that they were actually talking to each other, and not to an
impostor.
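The arithmetic behind the Diffie-Hellman exchange can be shown with toy numbers
(real systems use moduli hundreds of digits long); each side mixes its own private
exponent with the other's public value and arrives at the same shared key:

    # Publicly agreed parameters (toy-sized for illustration only).
    p = 23   # a prime modulus
    g = 5    # a generator

    # Each party picks a private exponent...
    alice_secret, bob_secret = 6, 15

    # ...and publishes g^secret mod p over the open channel.
    alice_public = pow(g, alice_secret, p)   # 8
    bob_public = pow(g, bob_secret, p)       # 19

    # (g^a)^b = (g^b)^a mod p, so both sides compute the same key,
    # while an eavesdropper sees only p, g and the two public values.
    assert pow(bob_public, alice_secret, p) == pow(alice_public, bob_secret, p)
    print(pow(bob_public, alice_secret, p))  # shared key: 2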

In 1977, encouraged by Diffie's efforts, three researchers from the Massachusetts
Institute of Technology published their paper on the world's first asymmetric cipher,
named RSA after its authors Rivest, Shamir and Adleman. The essence of asymmetry
is that different keys are used to encrypt and decrypt the message. A pair of keys,
known as the public and secret keys, is generated by every user of the system. The
public key is made available to everyone, while the secret key is never disclosed. A
message to a user is encrypted with the recipient's public key by the sender, and then
decrypted by the recipient with their secret key.
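A toy RSA key pair built from tiny primes shows the mechanics (this sketch omits
padding and every other safeguard a real implementation needs; genuine keys use
primes hundreds of digits long):

    # Toy RSA key generation (illustration only).
    p, q = 61, 53
    n = p * q                   # the modulus, part of both keys: 3233
    phi = (p - 1) * (q - 1)     # 3120
    e = 17                      # public exponent, coprime to phi
    d = pow(e, -1, phi)         # secret exponent: 2753 (Python 3.8+)

    message = 65                             # a message encoded as a number < n
    ciphertext = pow(message, e, n)          # encrypt with the public key: 2790
    assert pow(ciphertext, d, n) == message  # decrypt with the secret key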

The effect of the RSA algorithm was to revolutionise cryptography. For the first time
ever, two people could communicate securely across public networks without the
prior need to establish a shared secret key. Messages encrypted using RSA are not
unbreakable, but the conventional computing power required to derive the secret key
from the public key is so vast as to make the task totally unfeasible. Claims that the
US National Security Agency (NSA), owner of the most processing power in the
world, can break popular encryption algorithms such as DES (the Data Encryption
Standard) are, as yet, unproven. As William Crowell, Deputy Director of the NSA,
succinctly put it:

“If all the personal computers in the world (approximately 260 million
computers) were to be put to work on a single [public-key] encrypted
message, it would take on average an estimated 12 million times the age of the
universe to break a single message.”

The future possibility of quantum computers, with the ability to break RSA within
minutes, would certainly cast doubt on the security of the cipher. Cryptographic
research has recently turned to harnessing quantum physics instead, to produce a
cipher combining a secure means of key exchange with the absolute security of the
one-time pad, immune to the code-breaking attempts of a quantum computer. Is
quantum cryptography the final unbreakable cipher in the history of code-making?




[1] It is now widely accepted that a complete system of public-key cryptography was
in fact first developed in 1975 by James Ellis, Clifford Cocks and Malcolm
Williamson, a team working at the UK's Government Communications Headquarters
(GCHQ), but owing to military secrecy issues they were not allowed to publicise
their work at the time.

The Development of Quantum Theory

A quantum theory of matter began to be developed around the beginning of the 20th
century, in response to a series of unexpected experimental results which did not
conform to the previously accepted Newtonian model of the universe. The essence of
quantum theory is the realisation that elementary particles (electrons, protons,
neutrons etc.) also have the ability to behave as waves. A test which neatly
demonstrates this peculiar behaviour, known as wave/particle duality, is the twin-slit
interference experiment with light photons. If light is directed at two slits in a screen,
the waves will radiate outwards as shown below in Fig.1:




Fig.1: Light waves in the twin slit interference experiment

The light waves will interfere with each other in a very similar way to ripples in
water. Where two peaks meet, an even greater peak (more light) is created, and where
two troughs meet, an even lower trough (less light) is formed. When the light hits the
back screen an 'interference pattern' is created:




Fig.2: Pattern formed on screen showing differing intensities caused by light wave interference




So far this conforms to the accepted Newtonian model of the universe; but it was
found that if the apparatus was instead made to emit just a single photon (a quantum
of light) at a time over a period of time, exactly the same interference pattern was
formed on the screen. This was a startling result which completely challenged
conventional physics. When it is emitted, the photon goes through either one slit or
the other; in order to produce an interference pattern the photon must somehow have
interfered with itself.

Two schools of thought have developed to explain the results of the twin slit
interference experiment, both holding opinions which seem to defy common sense.
The first theory suggests that, as the exact behaviour of each photon is unknown, it is
reasonable to believe that the photon can perform all possibilities simultaneously,
passing through both slits. Each possibility is called a state, and as long as the
photons are not observed, the experiment is said to be in a superposition of states.
The second camp instead posits the idea that at the moment when the photon has the
choice between slits the universe divides into two and in one universe the photon
passes through the left slit and in the other it passes through the right one. This
plurality of universes is known as the multiverse.

Even though it is not yet fully understood exactly how the results of the experiment
come about, they cannot be disputed. Quantum theory has now found applications in
many areas of electronics, such as computer processors and the low-power lasers in
compact disc players.

A particularly exciting application of quantum mechanics, still at the theoretical
stage, is quantum computing. Conventional computers use binary digits (bits) set to
either one or zero to perform calculations. Quantum computers, it has been proposed,
could use electrons spinning either clockwise or anti-clockwise to represent ones and
zeroes (qubits). If these are kept in an unobserved superposition of states, all the
possible states can be evaluated simultaneously and the correct answer to a
calculation obtained in a fraction of the time it would take a standard computer. This
promised leap in processing power is a real threat to the security of all currently
existing ciphers. The current effectiveness of RSA could be eliminated at a stroke, so
there is clearly a pressing need to pre-emptively develop a more resilient cipher.




The Advent of Quantum Cryptography

Shortly before British physicist David Deutsch published the first paper[2] proposing
quantum computers in 1985, cryptologists had united quantum theory with code-
making. In the early 1980s two computer scientists, Charles Bennett, a researcher for
IBM, and Gilles Brassard, of the University of Montreal, realised that the application
of quantum theory to the field of cryptography had the potential to create a cipher
giving absolute security for eternity. Initial work was hampered by the ubiquitous
problem of key distribution: if a conventional key-exchange system such as RSA was
used, any security would be quickly lost to a brute-force attack by a quantum
computer.

The cryptosystem developed by Bennett and Brassard uses polarised light photons to
transfer data between two points. As a photon travels through space it vibrates
perpendicularly to its direction of movement; the direction of vibration is known as
its polarisation. For the sake of simplicity I have restricted the depicted directions of
vibration to horizontal and vertical, although in actuality photons may vibrate at any
angle in between those shown:




Fig.3: Diagram showing vertically and horizontally polarised light



Polarising filters can be created from plastic polymers which will only allow light of
a certain polarisation through. These filters, or Polaroids, will block photons polarised
perpendicularly to their axis, but will allow those with the same polarisation through.
If a photon's polarisation differs from the filter's axis by 45° the photon faces a
quantum decision: approximately half of such photons will make it through, and half
will be blocked.
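The pass probability for a single photon follows Malus's law, cos² of the angle
between the photon's polarisation and the filter's axis; a small Python simulation of
the statistics (a classical sketch of the quantum behaviour, not the physics itself):

    import math
    import random

    def photon_passes(polarisation_deg, filter_deg):
        # A single photon passes with probability cos^2(angle difference):
        # 0 deg -> always, 90 deg -> never, 45 deg -> a 50/50 quantum decision.
        theta = math.radians(polarisation_deg - filter_deg)
        return random.random() < math.cos(theta) ** 2

    trials = 100_000
    passed = sum(photon_passes(45, 0) for _ in range(trials))
    print(passed / trials)   # ~0.5 for a 45-degree mismatch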




[2] This influential paper is available online at the Centre for Quantum Computation:
http://www.qubit.org/oldsite/resource/deutsch85.pdf

Bennett and Brassard's proposed scheme takes advantage of the fact that an observer
has no idea which angle of polarising filter should be used for a certain photon to
pass through successfully. Suppose that the binary ones and zeroes of digital
communication are represented, in one scheme, by vertically (|) and horizontally (-)
polarised photons (the rectilinear scheme), and in the other by left-diagonally (\) and
right-diagonally (/) polarised photons. The sender of the message will randomly