Historically, cryptography was used to assure only secrecy. Wax seals, signatures, and other physical mechanisms were typically used to assure integrity of the media and authenticity of the sender. With the advent of electronic funds transfer, the applications of cryptography for integrity began to surpass its use for secrecy. Electronic cash came into being from cryptography, and the electronic credit card and debit card sprang into widespread use. The advent of public key cryptography introduced the possibility of digital signatures and other related concepts such as electronic credentials. In the information age, cryptography has become one of the major methods for protection in all applications.
Cryptographic protocols have only recently come under intensive study, and as of this time, they are not sufficiently well developed to provide a great deal of assurance. There are several protocols that offer provable properties, primarily those designed for use with the OTP. The problem with proving properties of protocols under other schemes is that the mathematics is extremely complex for the RSA, and there is no sound mathematical basis for the DES. Much research is under way at this time in the field of protocol analysis and verification, and it is likely that once this field stabilizes, cryptographic protocols will follow suit.
Several special purpose cryptographic protocols have been developed and demonstrated sound. Most notable are the RSA key distribution protocol, a public key poker playing protocol, an OTP based dining cryptographers protocol, and the protocol used for verifying the nuclear test ban treaty [Chaum83] [Simmons81].
A typical cryptographic protocol failure is encountered in the use of the RSA [Jonge85]. If an attacker can choose the plaintext to be signed under an RSA signature system, observe the resulting signature, and then iterate the process, the attacker can get the signer to reveal the private key in a very small number of signatures (about one signature per bit of the key). Thus an unmodified RSA signature system requires a sound protocol to be safely used.
Most current secrecy systems for transmission use a private key system for transforming transmitted information because it is the fastest method that operates with reasonable assurance and low overhead.
If the number of communicating parties is small, key distribution is done periodically with a courier service, and key maintenance is based on physical security of the keys over their period of use and destruction after new keys are distributed.
If the number of parties is large, electronic key distribution is usually used. Historically, key distribution was done with a special key-distribution-key (also known as a master-key) maintained by all parties in secrecy over a longer period of time than the keys used for a particular transaction. The "session-key" is generated at random either by one of the parties or by a trusted third party and distributed using the master-key.
The problem with master-key systems is that if the master-key is successfully attacked, the entire system collapses. Similarly, if any of the parties under a given master-key decides to attack the system, they can forge or intercept all messages throughout the entire system. Many complex private-key systems for reducing some of these problems have been proposed and used for various applications.
With the advent of public-key systems, secrecy can be maintained without a common master-key or a large number of keys. Instead, if Bob wants to communicate with Alice, Bob sends Alice a session-key encrypted with Alice's public key. Alice decrypts the session-key and uses that over the period of the transaction.
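The exchange above can be sketched with textbook RSA. This is a toy illustration only: the primes, exponents, and session key are tiny made-up values chosen so the arithmetic is visible, and textbook RSA without padding is not safe to use in practice.

```python
# Toy sketch of session-key distribution under a public-key system.
# Textbook RSA with tiny primes -- for exposition, not for real use.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Multiplicative inverse of a modulo m."""
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Alice's key pair: (n, e) is published, d is kept private.
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = modinv(e, (p - 1) * (q - 1))   # private exponent

# Bob picks a random session key and sends it encrypted under
# Alice's public key; only Alice can recover it.
session_key = 42
ciphertext = pow(session_key, e, n)

recovered = pow(ciphertext, d, n)
assert recovered == session_key
```

Both parties then use the recovered session key with a fast private-key cipher for the rest of the transaction.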
These are examples of cryptographic protocols, methods for communicating while attaining a particular cryptographic objective. These protocols are used primarily to deal with key management and system misuse problems. Many other protocols are applied to eliminate other attacks on these systems.
Secrecy in storage is usually maintained by a one-key system where the user provides the key to the computer at the beginning of a session, and the system then takes care of encryption and decryption throughout the course of normal use. As an example, many hardware devices are available for personal computers to automatically encrypt all information stored on disk. When the computer is turned on, the user must supply a key to the encryption hardware. The information cannot be read meaningfully without this key, so even if the disk is stolen, the information on it will not be usable.
Secrecy in storage has its problems. If the user forgets a key, all of the information encrypted with it becomes permanently unusable. The information is only encrypted while in storage, not when in use by the user, which leaves a major hole for the attacker. If the encryption and decryption are done in software, or if the key is stored somewhere in the system, the system may be circumvented by an attacker. Backups of encrypted information are often stored in plaintext because the encryption mechanism is only applied to certain devices.
Many of the users of communication systems are not as much concerned about secrecy as about integrity. In an electronic funds transfer, the amount sent from one account to another is often public knowledge. What the bank cares about is that only proper transfers can take place. If an active tapper could introduce a false transfer, funds would be moved illicitly. An error in a single bit could literally cause millions of dollars to be erroneously credited or debited. Cryptographic techniques are widely used to assure that intentional or accidental modification of transmitted information does not cause erroneous actions to take place.
A typical technique for assuring integrity is to perform a checksum of the information being transmitted and transmit the checksum in encrypted form. Once the information and encrypted checksum are received, the information is again checksummed and compared to the transmitted checksum after decryption. If the checksums agree, there is a high probability that the message is unaltered. Unfortunately, this scheme is too simple to be of practical value as it is easily forged. The problem is that the checksum of the original message is immediately apparent and a plaintext message with an identical checksum can be easily forged. Designing strong cryptographic checksums is therefore important to the assurance of integrity in systems of this sort.
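The verification step above can be sketched with a modern keyed cryptographic checksum. This uses HMAC-SHA256 as a stand-in for the encrypted checksum the text describes; the key, message, and helper name are illustrative assumptions, not the schemes of the period.

```python
import hashlib
import hmac

key = b"shared secret key"
message = b"TRANSFER $100 FROM A TO B"

# Sender computes a keyed cryptographic checksum over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Receiver recomputes the checksum and compares in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# An unaltered message verifies; a forged amount does not, because
# the forger cannot produce a matching checksum without the key.
assert verify(key, message, tag)
assert not verify(key, b"TRANSFER $999 FROM A TO B", tag)
```

The key-dependence is what distinguishes this from the forgeable plain checksum: an attacker who sees the message and the tag still cannot compute a valid tag for a modified message.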
The key distribution problem in a one-key system is as before, but an interesting alternative is presented by the use of public keys. If we generate a single public key for the entire system and throw away the private key that would go with it, we can make the checksum impossible to decrypt. In order to verify the original message, we simply generate a new checksum, encrypt it with the public key, and verify that the encrypted checksum matches. This is known as a one-way function because it is hard to invert.
Actual systems of this sort use high quality cryptographic checksums and complex key distribution and maintenance protocols, but there is a trend towards the use of public keys for key maintenance.
Integrity against random noise has been the subject of much study in the fields of fault tolerant computing and coding theory, but only recently has the need for integrity of stored information against intentional attack become a matter for cryptography.
The major means of assuring integrity of stored information has historically been access control. Access control includes systems of locks and keys, guards, and other mechanisms of a physical or logical nature. The recent advent of computer viruses has changed this to a significant degree, and the use of cryptographic checksums for assuring the integrity of stored information is now becoming widespread.
As in the case of integrity in transmission, a cryptographic checksum is produced and compared to expectations, but storage media tends to have different properties than transmission media. Transmitted information is typically more widely available over a shorter period of time, used for a relatively low volume of information, and accessed at a slower rate than stored information. These parameters cause different tradeoffs in how cryptosystems are used.
Authenticating the identity of individuals or systems to each other has been a problem for a very long time. Simple passwords have been used for thousands of years to prove identity. More complex protocols such as sequences of keywords exchanged between sets of parties are often shown in the movies or on television. Cryptography is closely linked to the theory and practice of using passwords, and modern systems often use strong cryptographic transforms in conjunction with physical properties of individuals and shared secrets to provide highly reliable authentication of identity.
Determining good passwords falls into the field known as key selection. In essence, a password can be thought of as a key to a cryptosystem that allows encryption and decryption of everything that the password allows access to. In fact, password systems have been implemented in exactly this way in some commercial products.
The selection of keys has historically been a cause of cryptosystem failure. Although we know from Shannon that H(K) is maximized for a key chosen with an equal probability of each possible value (i.e. at random), in practice when people choose keys, they choose them to make them easy to remember, and therefore not at random. This is most dramatically demonstrated in the poor selection that people make of passwords.
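Shannon's point can be made concrete by computing H(K) for two key distributions. The sample keys below are illustrative assumptions; the calculation itself is the standard Shannon entropy formula.

```python
from collections import Counter
from math import log2

def entropy(keys):
    """Shannon entropy H(K), in bits, of an observed key distribution."""
    counts = Counter(keys)
    total = len(keys)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Eight keys chosen with equal probability maximize H(K) at log2(8) = 3 bits.
uniform = ["k%d" % i for i in range(8)]

# Human-chosen keys cluster on a few easy-to-remember favorites,
# so the same number of observations carries far less entropy.
skewed = ["password"] * 5 + ["123456"] * 2 + ["qwerty"]

assert entropy(uniform) == 3.0
assert entropy(skewed) < 3.0
```

Lower H(K) means an attacker who guesses the popular keys first succeeds far sooner than a uniform-key analysis would suggest.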
On many systems, passwords are stored in encrypted form with read access available to all so that programs wishing to check passwords needn't be run by privileged users. A side benefit is that the plaintext passwords don't appear anywhere in the system, so an accidental leak of information doesn't compromise system wide protection.
A typical algorithm for transforming any string into an encrypted password is designed so that it takes 10 or more msec/transformation to encode a string. By simple calculation, if only capital letters were allowed in a password, it would take 0.26 seconds to check all the one letter passwords, 6.76 seconds to check all the 2 letter passwords, 4570 seconds for the 4 letter passwords, and by the time we got to 8 letter passwords, it would take about 2*10**9 seconds (24169 days, over 66 years).
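These figures follow directly from the search-space arithmetic, which can be reproduced in a few lines (the 10 msec/trial cost and 26-letter alphabet are the assumptions stated above):

```python
MS_PER_TRIAL = 10  # assumed cost of one password transformation, in msec

def time_to_try_all(alphabet_size, length):
    """Seconds to exhaustively try every password of a given length."""
    return alphabet_size ** length * MS_PER_TRIAL / 1000

assert time_to_try_all(26, 1) == 0.26          # all 1-letter passwords
assert time_to_try_all(26, 2) == 6.76          # all 2-letter passwords
assert round(time_to_try_all(26, 4)) == 4570   # all 4-letter passwords

# 8-letter passwords: about 2*10**9 seconds, i.e. over 66 years.
years = time_to_try_all(26, 8) / (365.25 * 24 * 3600)
assert 60 < years < 70
```

Enlarging the alphabet (lower case, digits, symbols) multiplies the base of the exponential, which is why the next paragraph's larger character sets raise the cost considerably.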
For passwords allowing lower case letters, numbers, and special symbols, this goes up considerably. Studies over the years have consistently indicated that key selection by those without a knowledge of protection is very poor. In a recent study [Highland86], 21% of the users on a computer system had 1 character passwords, with up to 85% having passwords of half the maximum allowable length, and 92% having passwords of 4 characters or less. These results are quite typical, and dramatically demonstrate that 92% of all passwords could be guessed on a typical system in just over an hour.
Several suggestions for getting unpredictable uniform random numbers include the use of low order bits of Geiger counter counts, the use of the time between entries at a keyboard, low order bits of the amount of light in a room as measured by a light sensitive diode, noisy diode output, the last digit of the first phone number on a given page of a telephone book, and digits from transcendental numbers such as Pi.
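Physical sources like those above tend to produce biased or correlated bits, so a post-processing step is commonly applied. One classic method (not described in the text above, added here as an illustration) is von Neumann's extractor, which turns independent but biased coin flips into unbiased bits by keeping only non-matching pairs:

```python
def von_neumann_extract(bits):
    """Von Neumann's debiasing method for an independent biased bit source:
    examine successive pairs; 01 -> 0, 10 -> 1, and 00/11 are discarded.
    The surviving bits are unbiased regardless of the source's bias."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# A heavily biased sample still yields fair output bits (pairs 11 are dropped).
biased = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1]
assert von_neumann_extract(biased) == [0, 1, 1, 0]
```

The price is throughput: the more biased the source, the more pairs match and are thrown away.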
A credential is typically a document that introduces one party to another by referencing a commonly known trusted party. For example, when credit is applied for, references are usually requested. The credit of the references is checked and they are contacted to determine the creditworthiness of the applicant. Credit cards are often used to credential an individual to attain further credit cards. A driver's license is a form of credential, as is a passport.
Electronic credentials are designed to allow the credence of a claim to be verified electronically. Although no purely electronic credentialing systems are in widespread use at this time, many such systems are being integrated into the smart-card systems in widespread use in Europe. A smart-card is simply a credit-card shaped computer that performs cryptographic functions and stores secret information. When used in conjunction with other devices and systems, it allows a wide variety of cryptographic applications to be performed with relative ease of use to the consumer.
Electronic signatures, like their physical counterparts, are a means of providing a legally binding transaction between two or more parties. To be as useful as a physical signature, electronic signatures must be at least as hard to forge, at least as easy to use, and accepted in a court of law as binding upon all parties to the transaction.
The need for these electronic signatures is especially acute in business dealings wherein the parties to a contract are not in the same physical vicinity. For example, in an international sale of an airplane, signatures are typically required from two bankers, two companies, two insurance agencies, two attorneys, two governments, and often several other parties. The contracts are hundreds of pages long, and signatures must be attained within a relatively short period of time on the same physical document. Facsimile signatures are not legally binding in all jurisdictions, and the sheer length of a document precludes all parties reading the current copy as they meet at the table to affix signatures. Under current law, all parties must meet in one location in order to complete the transaction. In a transatlantic sale, $100,000 in costs can easily be incurred in such a meeting.
An effort in Europe is currently underway to replace physical signatures with electronic signatures based on the RSA cryptosystem. If this effort succeeds, it will allow many millions of dollars to be saved, and launch the era of digital signatures into full scale motion. It will also create a large savings for those who use the system, and therefore act to force others to participate in order to remain competitive.
There are patents in force throughout the world today to allow electronic information to replace cash money for financial transactions between individuals. Such a system involves using cryptography to keep the assets of nations in electronic form. Clearly the ability to forge such a system would allow national economies to be destroyed in an instant. The pressure for integrity in such a system is staggering.
Thresholding systems are systems designed to allow use only if a minimal number of parties agree to said use. For example, in a nuclear arms situation, you might want a system wherein three out of five members of the joint chiefs of staff agree. In a banking situation, a safe might only be opened if 4 of the 23 people authorized to open it were present. Such systems preclude a single individual acting alone, while allowing many of the parties to a transaction to be absent without the transaction being halted.
Most threshold systems are based on encryption with keys which are distributed in parts. The most common technique for partitioning a key into parts is to form the key as the solution to N equations in N unknowns. If N independent equations are known, the key can be determined by solving the simultaneous equations. If less than N equations are known, the key can be any value since there is still an independent variable in the equations. Any number can be chosen for N and equations can be held by separate individuals. The same general concept can be used to form arbitrary combinations of key requirements by forming ORs and ANDs of encryptions using different sets of keys for different combinations of key holders [Shamir79] . The major difficulties with such a system lie in the key distribution problem and the large number of keys necessary to achieve arbitrary key holder combinations.
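A minimal sketch of this idea is Shamir's threshold scheme [Shamir79]: the key is the constant term of a random polynomial of degree k-1 over a prime field, each holder gets one point on the polynomial, and any k points determine the key by interpolation (fewer than k leave it completely undetermined). The prime, secret value, and parameters below are illustrative assumptions.

```python
import random

P = 2 ** 127 - 1   # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which recover it.
    The secret is the constant term of a random degree k-1 polynomial;
    each share is a point (x, f(x)) on that polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

# 3-of-5 threshold: any three share holders can reconstruct the key.
shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789
assert recover(shares[1:4]) == 123456789
```

This realizes the "N equations" intuition above while avoiding the key-explosion problem: one polynomial covers every k-of-n combination of holders.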
Shannon has shown us that given enough reuse of a key, it can eventually be determined. It is thus common practice to regularly change keys to limit the exposure due to successful attack on any given key. A common misconception is that changing a key much more often than the average time required to break the cryptosystem provides an increased margin of safety.
If we assume the key is chosen at random, and that the attacker can check a given percentage of the keys before a key change is made, it is only a matter of time before one of the keys checked by the attacker happens to correspond to one of the random keys. If the attacker chooses keys to attack at random without replacement over the period of key usage, and begins again at the beginning of each period, it is 50% likely that a currently valid key will be found by the time required to try 50% of the total number of keys, regardless of key changes. Thus if a PC could try all the DES keys in 10 years, it would be 50% likely that a successful attack could be launched in 5 years of effort. The real benefit of key changes is that the time over which a broken key is useful is limited to the time till the next key change. This is called limiting the exposure from a stolen key.
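The argument above can be checked with a small Monte Carlo sketch (the keyspace size, search fraction, and number of periods are arbitrary assumptions chosen to keep the simulation fast):

```python
import random

def attack(keyspace, frac_per_period, periods, trials=2000):
    """Each period a fresh key is chosen at random; the attacker tries
    `frac_per_period` of the keyspace at random without replacement,
    restarting each period. Returns the fraction of trials in which
    some period's current key was found."""
    tries = int(keyspace * frac_per_period)
    hits = 0
    for _ in range(trials):
        for _ in range(periods):
            key = random.randrange(keyspace)
            guesses = random.sample(range(keyspace), tries)
            if key in guesses:
                hits += 1
                break
    return hits / trials

random.seed(1)
# Trying 10% of the keys in each of 5 periods (50% of the keyspace in
# total effort) succeeds about 1 - 0.9**5 = 41% of the time: the
# attacker's chances accumulate across key changes.
rate = attack(keyspace=1000, frac_per_period=0.10, periods=5)
assert 0.35 < rate < 0.47
```

The success probability is governed by total search effort, not by how often the key is rotated, which is the point of the paragraph above: key changes limit exposure after a break, they do not prevent one.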
Historically, cryptography has been carried out through the use of cryptographic devices. The use of these devices derives from the difficulty in performing cryptographic transforms manually, the severe nature of errors that result from the lack of redundancy in many cryptographic systems, and the need to make the breaking of codes computationally difficult.
In WWII, the ENIGMA machine was used by the Germans to encode messages, and one of the first computing machines ever built was the Bombe, which was designed to break ENIGMA cryptograms. Modern supercomputers are used primarily by the NSA to achieve the computational advantage necessary to break many modern cryptosystems. The CRAY could be easily used to break most password enciphering systems, RSA systems with keys of length under about 80 (circa 1986) are seriously threatened by the CRAY, and even the DES can be attacked by using special purpose computer hardware [Diffie77]. Many devices have emerged in the marketplace that use cryptography to encrypt transmissions, act as cryptographic keys for authentication of identification, protect so called debit cards and smart cards, and implement electronic cash money systems.