In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and that time can differ based on the input; with sufficiently precise measurements of the time for each operation, an attacker can work backwards to the input. Finding secrets through timing information may be significantly easier than cryptanalysis of pairs of known plaintext and ciphertext. Sometimes timing information is combined with cryptanalysis to increase the rate of information leakage.[1]
Information can leak from a system through measurement of the time it takes to respond to certain queries. How much this information can help an attacker depends on many variables: cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, the accuracy of the timing measurements, etc. Timing attacks can be applied to any algorithm that has data-dependent timing variation. Removing timing dependencies is difficult in some algorithms that use low-level operations that frequently exhibit varied execution time.
Timing attacks are often overlooked in the design phase because they are so dependent on the implementation and can be introduced unintentionally with compiler optimizations. Avoidance of timing attacks involves design of constant-time functions and careful testing of the final executable code.[1]
Avoidance
Many cryptographic algorithms can be implemented (or masked by a proxy) in a way that reduces or eliminates data-dependent timing information; such an implementation is called a constant-time algorithm. Consider an implementation in which every call to a subroutine always returns in exactly x seconds, where x is the maximum time it ever takes to execute that routine on every possible authorized input. In such an implementation, the timing of the algorithm is less likely to leak information about the data supplied to that invocation.[2] The downside of this approach is that the time used for all executions becomes that of the worst-case performance of the function.
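A minimal sketch of this padding approach follows, assuming a POSIX environment; do_sensitive_work and PAD_NS are hypothetical names, and padding with sleeps only approximates a truly constant-time implementation:

#include <stdint.h>
#include <time.h>

#define PAD_NS 5000000L   /* fixed budget, chosen above the routine's worst-case time */

extern int do_sensitive_work(const void *input);   /* hypothetical secret-dependent routine */

int padded_call(const void *input) {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    int result = do_sensitive_work(input);

    clock_gettime(CLOCK_MONOTONIC, &now);
    int64_t elapsed = (int64_t)(now.tv_sec - start.tv_sec) * 1000000000L
                    + (now.tv_nsec - start.tv_nsec);
    if (elapsed < PAD_NS) {
        struct timespec pad = { 0, (long)(PAD_NS - elapsed) };
        nanosleep(&pad, NULL);   /* sleep away the rest of the fixed budget */
    }
    return result;
}

Padding to a fixed wall-clock duration hides variation from a remote observer, but it does not remove microarchitectural leakage (caches, branch predictors) visible to a local attacker, which is one reason constant-time coding is usually preferred.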
The data-dependency of timing may stem from one of the following:[1]
- Non-local memory access, as the CPU may cache the data. Software run on a CPU with a data cache will exhibit data-dependent timing variations as a result of memory lookups into the cache.
- Conditional jumps. Modern CPUs try to speculatively execute past conditional jumps by guessing the outcome. Guessing wrong (not uncommon with essentially random secret data) entails a measurably large delay as the CPU backtracks. Avoiding this leak requires writing branch-free code wherever a branch condition depends on a secret; a minimal sketch of this style appears after this list.
- Some "complicated" mathematical operations, depending on the actual CPU hardware:
- Integer division is almost always non-constant time. The CPU uses a microcode loop that uses a different code path when either the divisor or the dividend is small.
- CPUs without a barrel shifter run shifts and rotations in a loop, one position at a time. As a result, the shift amount must not be secret.
- Older CPUs run multiplications in a way similar to division.
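As referenced in the conditional-jump item above, here is a minimal sketch of the branch-free style; it uses a standard masking idiom rather than any particular library's API:

#include <stdint.h>

/* Returns a when cond is 1 and b when cond is 0, without a secret-dependent jump. */
uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - cond;   /* all ones if cond == 1, all zeros if cond == 0 */
    return (a & mask) | (b & ~mask);
}

Because the same instructions execute for either value of cond, the running time does not reveal which branch was logically taken (though the compiled output must still be checked, since an optimizer may reintroduce a branch).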
Examples
The execution time for the square-and-multiply algorithm used in modular exponentiation depends linearly on the number of '1' bits in the key. While the number of '1' bits alone is not nearly enough information to make finding the key easy, repeated executions with the same key and different inputs can be used to perform statistical correlation analysis of timing information to recover the key completely, even by a passive attacker. Observed timing measurements often include noise (from sources such as network latency, disk drive access variation from access to access, and the error correction techniques used to recover from transmission errors). Nevertheless, timing attacks are practical against a number of encryption algorithms, including RSA, ElGamal, and the Digital Signature Algorithm.
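The data-dependent branch is easy to see in a sketch of the algorithm; the illustration below uses 32-bit arithmetic, whereas real RSA or Diffie-Hellman implementations use multi-precision integers:

#include <stdint.h>

uint32_t modexp(uint32_t base, uint32_t exp, uint32_t mod) {
    uint64_t result = 1 % mod;
    uint64_t b = base % mod;
    for (int i = 31; i >= 0; i--) {
        result = (result * result) % mod;    /* squaring happens for every exponent bit */
        if ((exp >> i) & 1)                  /* branch on a secret exponent bit */
            result = (result * b) % mod;     /* extra multiply only for '1' bits */
    }
    return (uint32_t)result;
}

The extra multiplication inside the if-branch is what makes the total running time grow with the number of '1' bits in the secret exponent.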
In 2003, Boneh and Brumley demonstrated a practical network-based timing attack on SSL-enabled web servers, based on a different vulnerability having to do with the use of RSA with Chinese remainder theorem optimizations. The actual network distance was small in their experiments, but the attack successfully recovered a server private key in a matter of hours. This demonstration led to the widespread deployment and use of blinding techniques in SSL implementations. In this context, blinding is intended to remove correlations between key and encryption time.[3]
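One common form of RSA blinding, sketched here in general terms rather than as any particular library's implementation: to compute the private-key operation m = c^d mod N, the implementation picks a fresh random r, computes x = (c · r^e) mod N, raises x to the d-th power, and multiplies the result by r^{-1} mod N. Since (r^e)^d ≡ r (mod N), the final value is still c^d mod N, but the number actually being exponentiated is independent of the attacker-chosen ciphertext, so the measured time no longer correlates with c.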
Some versions of Unix use a relatively expensive implementation of the crypt library function for hashing an 8-character password into an 11-character string. On older hardware, this computation took a deliberately and measurably long time: as much as two or three seconds in some cases. The login program in early versions of Unix executed the crypt function only when the login name was recognized by the system. This leaked information through timing about the validity of the login name, even when the password was incorrect. An attacker could exploit such a leak by first using brute force to produce a list of login names known to be valid, then attempting to gain access by combining only these names with a large set of passwords known to be frequently used. Without any information on the validity of login names, the time needed to execute such an approach would increase by orders of magnitude, effectively rendering it useless. Later versions of Unix fixed this leak by always executing the crypt function, regardless of login name validity.
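A minimal sketch of the fixed behaviour follows; lookup_user, stored_hash_for, hash_password and DUMMY_HASH are hypothetical names, not the actual Unix login code:

#include <stdbool.h>
#include <string.h>

extern bool lookup_user(const char *name);              /* is the login name known? */
extern const char *stored_hash_for(const char *name);   /* stored password hash for that user */
extern const char *hash_password(const char *password, const char *salt_source);  /* expensive crypt-like hash */

#define DUMMY_HASH "xxxxxxxxxxx"   /* syntactically valid placeholder hash */

bool check_login(const char *name, const char *password) {
    bool user_exists = lookup_user(name);
    /* Use the stored hash when the user exists and a fixed placeholder otherwise,
       so the expensive hash runs in both cases. */
    const char *stored = user_exists ? stored_hash_for(name) : DUMMY_HASH;
    const char *computed = hash_password(password, stored);

    /* A constant-time comparison (see the Algorithm section below) would be
       preferable to strcmp() here. */
    return user_exists && strcmp(computed, stored) == 0;
}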
Two otherwise securely isolated processes running on a single system with either cache memory or virtual memory can communicate by deliberately causing page faults and/or cache misses in one process, then monitoring the resulting changes in access times from the other. Likewise, if an application is trusted, but its paging/caching is affected by branching logic, it may be possible for a second application to determine the values of the data compared against the branch condition by monitoring access time changes; in extreme examples, this can allow recovery of cryptographic key bits.[4][5]
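The measurement primitive behind such cache attacks can be sketched on x86 with compiler intrinsics; GCC/Clang are assumed here, and real attacks such as Flush+Reload add calibration, fencing, and synchronization with the victim:

#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush and __rdtsc */

/* Evict a shared address from the cache before the victim runs. */
static inline void flush(const void *addr) {
    _mm_clflush(addr);
}

/* Time a reload of the address afterwards: a small cycle count means the line
   was brought back into the cache, i.e. the victim probably touched it. */
static inline uint64_t probe(const volatile char *addr) {
    uint64_t start = __rdtsc();
    (void)*addr;
    return __rdtsc() - start;
}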
The 2017 Meltdown and Spectre attacks which forced CPU manufacturers (including Intel, AMD, ARM, and IBM) to redesign their CPUs both rely on timing attacks.[6] As of early 2018, almost every computer system in the world is affected by Spectre.[7][8][9]
Timing attacks are difficult to prevent and can often be used to extend other attacks. For example, in 2018, an old attack on RSA was rediscovered in a timing side-channel variant, two decades after the original bug.[10]
Algorithm
The following C code demonstrates a typical insecure string comparison which stops testing as soon as a character doesn't match. For example, when comparing "ABCDE" with "ABxDE" it will return after 3 loop iterations:
#include <stdbool.h>   /* bool */
#include <stddef.h>    /* size_t */

bool insecureStringCompare(const void *a, const void *b, size_t length) {
    const char *ca = a, *cb = b;
    for (size_t i = 0; i < length; i++)
        if (ca[i] != cb[i])
            return false;
    return true;
}
By comparison, the following version runs in constant time by testing all characters and using a bitwise operation to accumulate the result:
bool constantTimeStringCompare(const void *a, const void *b, size_t length) {
    const char *ca = a, *cb = b;
    bool result = true;
    for (size_t i = 0; i < length; i++)
        result &= ca[i] == cb[i];
    return result;
}
In the world of C library functions, the first function is analogous to memcmp(), while the latter is analogous to NetBSD's consttime_memequal()[11] or OpenBSD's timingsafe_bcmp() and timingsafe_memcmp(). On other systems, the comparison function from cryptographic libraries like OpenSSL and libsodium can be used.
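As a usage sketch, such a constant-time comparison is typically used to check a secret-dependent value such as a MAC tag; compute_mac and TAG_LEN are hypothetical, and constantTimeStringCompare is the function defined above:

#include <stdbool.h>
#include <stddef.h>

#define TAG_LEN 32

/* Hypothetical MAC routine; any keyed MAC would do. */
extern void compute_mac(const unsigned char *msg, size_t len, unsigned char out[TAG_LEN]);

/* Defined in the snippet above. */
extern bool constantTimeStringCompare(const void *a, const void *b, size_t length);

bool verify_tag(const unsigned char *msg, size_t len, const unsigned char tag[TAG_LEN]) {
    unsigned char expected[TAG_LEN];
    compute_mac(msg, len, expected);
    /* An early-exit memcmp() here would let an attacker recover the expected tag
       byte by byte from timing; the constant-time comparison leaks only the
       overall success or failure. */
    return constantTimeStringCompare(expected, tag, TAG_LEN);
}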
Notes
Timing attacks are easier to mount if the adversary knows the internals of the hardware implementation, and even more so, the cryptographic system in use. Since cryptographic security should never depend on the obscurity of either (see security through obscurity, specifically both Shannon's Maxim and Kerckhoffs's principle), resistance to timing attacks should not either. If nothing else, an exemplar can be purchased and reverse engineered. Timing attacks and other side-channel attacks may also be useful in identifying, or possibly reverse-engineering, a cryptographic algorithm used by some device.
References
- 1 2 3 "Constant-Time Crypto". BearSSL. Retrieved 10 January 2017.
- ↑ "A beginner's guide to constant-time cryptography". Retrieved 9 May 2021.
- ↑ David Brumley and Dan Boneh. Remote timing attacks are practical. USENIX Security Symposium, August 2003.
- ↑ See Percival, Colin, Cache Missing for Fun and Profit, 2005.
- ↑ Bernstein, Daniel J., Cache-timing attacks on AES, 2005.
- ↑ Horn, Jann (3 January 2018). "Reading privileged memory with a side-channel". googleprojectzero.blogspot.com.
- ↑ "Spectre systems FAQ". Meltdown and Spectre.
- ↑ "Security flaws put virtually all phones, computers at risk". Reuters. 4 January 2018.
- ↑ "Potential Impact on Processors in the POWER Family". IBM PSIRT Blog. 14 May 2019.
- ↑ Kario, Hubert. "The Marvin Attack". people.redhat.com. Retrieved 19 December 2023.
- ↑ "Consttime_memequal".
Further reading
- Paul C. Kocher. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. CRYPTO 1996: 104–113
- Lipton, Richard; Naughton, Jeffrey F. (March 1993). "Clocked adversaries for hashing". Algorithmica. 9 (3): 239–252. doi:10.1007/BF01190898. S2CID 19163221.
- Reparaz, Oscar; Balasch, Josep; Verbauwhede, Ingrid (March 2017). "Dude, is my code constant time?" (PDF). Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017. pp. 1697–1702. doi:10.23919/DATE.2017.7927267. ISBN 978-3-9815370-8-6. S2CID 35428223. Describes dudect, a simple program that times a piece of code on different data.