Abstract
Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of non-locality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed “entropy accumulation”, which asserts that the total amount of entropy of a large system is the sum of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.
The security of DIQKD is difficult to prove, as one needs to take into account every possible attack strategy. Here, the authors develop a method to determine the entropy of a system as the sum of the entropies of its parts. Applied to DIQKD, this implies that it suffices to consider i.i.d. attacks.
Introduction
Device-independent (DI) quantum cryptographic protocols achieve an unprecedented level of security—with guarantees that hold (almost) irrespective of the quality, or trustworthiness, of the physical devices used to implement them1. The most challenging cryptographic task in which DI security has been considered is quantum key distribution (QKD); we will use this task as an example throughout the manuscript. In DIQKD, the goal of the honest parties, called Alice and Bob, is to create a shared key, unknown to everybody else but them. To execute the protocol, they hold a device consisting of two parts: each part belongs to one of the parties and is kept in their laboratories. Ideally, the device performs measurements on some entangled quantum states it contains.
In real life, the manufacturer of the device, called Eve, can have limited technological abilities (and hence cannot guarantee that the device’s actions are exact and non-faulty) or even be malicious. The device itself is far too complex for Alice and Bob to open and assess whether it works as Eve alleges. Alice and Bob must therefore treat the device as a black box with which they can only interact according to the protocol. The protocol must allow them to test the possibly faulty or malicious device and decide whether using it to create their keys poses any security risk. The protocol guarantees that by interacting with the device according to the specified steps, the honest parties will either abort, if they detect a fault, or produce identical and secret keys (with high probability).
Adopting the DI approach is not only crucial for paranoid cryptographers; even the most skilled experimentalist will recognise that a fully characterised, permanently stable, large-scale quantum device that implements a QKD protocol is extremely hard to build. Indeed, implementations of QKD protocols have been attacked by exploiting imperfections of the devices2–5. Instead of trying to come up with a “patch” each time an imperfection in the device is detected, DI protocols allow us to break the cycle of attacks and countermeasures.
The most important (in fact necessary) ingredient, which forms the basis of all DI protocols, is a “test for quantumness” based on the violation of a Bell inequality6–9. A Bell inequality10,11 can be thought of as a game played by the honest parties using the device they share (Fig. 1). Different devices lead to different winning probabilities when playing the game. The game has a special “feature”—there exists a quantum device which achieves a winning probability ωq greater than all classical, local, devices. Hence, if the honest parties observe that their device wins the game with probability ωq they conclude that it must be non-local11. A recent sequence of breakthrough experiments have verified the quantum advantage in such “Bell games” in a loophole-free way12–14 (in particular, this means that the experiments were executed without making assumptions that could otherwise be exploited by Eve to compromise the security of a cryptographic protocol).
DI security relies on the following deep but well-established facts. High winning probability in a Bell game not only implies that the measured system is non-local, but more importantly that the kind of non-local correlations it exhibits cannot be shared: the higher the winning probability, the less information any eavesdropper can have about the devices’ outcomes. The tradeoff between winning probability and secret randomness, or entropy, can be made quantitative15,16.
The amount of entropy, or secrecy, generated in a single round of the protocol can therefore be calculated from the winning probability in a single game. The major challenge, however, consists in establishing that entropy accumulates additively throughout the multiple rounds of the protocol, and in using this fact to bound the total secret randomness produced by the device.
A commonly used assumption17–21 to simplify this task is that the device held by the honest parties makes the same measurements on identical and independent quantum states in every round i ∈ {1, …, n} of the protocol. This implies that the device is initialised in some (unknown) state of the form σ⊗n, i.e., an independent and identically distributed (i.i.d.) state, and that the measurements have a similar structure. In that case, the total entropy created during the protocol can be easily related to the sum of the entropies generated in each round separately (as further explained below).
Unfortunately, although quite convenient for the analysis, the i.i.d. assumption cannot be justified a priori. When considering device-dependent protocols, such as the BB84 protocol22, de Finetti theorems23,24 can often be applied to reduce the task of proving the security in the most general case to that of proving security with the i.i.d. assumption. This approach was unsuccessful in the DI scenario, where known de Finetti theorems23–26 do not apply. Hence, one cannot simply reduce a general security statement to the one proven under the i.i.d. assumption.
Without this assumption, however, very little is known about the structure of the untrusted device and hence also about its output. As a consequence, previous DIQKD security proofs had to address directly the most general case27–29. This led to security statements which are of limited relevance for practical experimental implementations; they are applicable only in an unrealistic regime of parameters, e.g., small amount of tolerable noise and large number of signals.
The work presented here resolves this situation. First, we provide a general information-theoretic tool that quantifies the amount of entropy accumulated during sequential processes which do not necessarily behave identically and independently in each step. We call this result the “Entropy Accumulation Theorem” (EAT). We then show how it can be applied to essentially reduce the problem of proving DI security in the most general case to that of the i.i.d. case. This allows us to establish simple and modular security proofs for DIQKD that yield tight key rates. Our quantitative results imply that the first proofs of principle experiments implementing a DIQKD protocol are within reach with today’s state-of-the-art technology. Aside from its application to security proofs, the EAT can be used in other scenarios in quantum information such as the analysis of quantum random access codes.
Results
In the following, we start by explaining the main steps in a security proof of DIQKD under the i.i.d. assumption using well-established techniques. We then present the EAT and show how it can be used to extend the proof and achieve full security (i.e., without assuming an i.i.d. behaviour of the device).
Security under the independent and identically distributed device assumption
The central task when proving the security of cryptographic protocols consists in bounding the information that an adversary, called Eve, may obtain about certain values generated by the protocol, which are supposed to be secret. For QKD, the appropriate measure of knowledge, or rather uncertainty, is given by the smooth conditional min-entropy30 Hmin^ε(K|E), where K is the raw data obtained by the honest parties, E the quantum system held by Eve, and ε a parameter describing the security of the protocol. The quantity Hmin^ε(K|E) determines the maximal length of the secret key that can be created by the protocol. Hence, proving the security amounts to establishing a lower bound on Hmin^ε(K|E). Evaluating this quantity can be a daunting task, as the adversary’s system E is out of our control; in particular, it can have arbitrary dimension and share quantum correlations with the users’ devices.
Most protocols consist of a basic building block, or “round”, which is repeated a large number, n, of times; in each round i, the classical data Ki is generated. The structure of a DIQKD protocol is shown in Box 1. The i.i.d. assumption means that the raw key can be treated as a sequence of i.i.d. random variables Ki. That is, all the Ki are identical and independent of one another. The eavesdropper has side information Ei about each Ki. In this case, the total conditional min-entropy can be directly related to the single-round conditional von Neumann entropy using the quantum asymptotic equipartition property31 (AEP), which asserts that
Hmin^ε(K|E) ≥ n·H(Ki|Ei) − c_ε·√n        (1)
where cε depends only on ε (see the Methods section).
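For illustration, the finite-size behaviour of the bound in Eq. (1) can be evaluated numerically. The sketch below is a minimal example in Python; the constant c_ε used here is an illustrative placeholder (the exact expression, given in ref. 31, also depends on the alphabet size), so the numbers only indicate how the per-round rate approaches the single-round entropy as n grows.

```python
import math

def iid_min_entropy_bound(n, h_round, eps=1e-8, c_scale=10.0):
    """Lower bound of Eq. (1): n * H(K_i|E_i) - c_eps * sqrt(n).

    h_round -- single-round conditional von Neumann entropy H(K_i|E_i)
    c_scale -- stand-in for the epsilon- and alphabet-dependent constant c_eps
               (illustrative value only, not the expression from ref. 31)
    """
    c_eps = c_scale * math.sqrt(math.log2(2.0 / eps**2))
    return n * h_round - c_eps * math.sqrt(n)

for n in [10**6, 10**8, 10**10]:
    print(f"n = {n:.0e}: per-round rate >= {iid_min_entropy_bound(n, 0.5) / n:.4f}")
```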
To get a bound on Hmin^ε(K|E), we therefore need to analyse the secrecy, quantified by H(Ki|Ei), resulting from a single round of the protocol. Depending on the considered scenario, a lower bound on H(Ki|Ei) can be found using different techniques. For discrete- and continuous-variable QKD, for example, one can use the entropic uncertainty relations32,33. When dealing with DIQKD, a quantum advantage in a Bell game implies a lower bound on H(Ki|Ei), as discussed above.
The Clauser–Horne–Shimony–Holt (CHSH) game34 (presented in Fig. 1) forms the basis for most DIQKD protocols. For this game, a tight bound on the secrecy as a function of the winning probability in the game was derived19. The bound implies that for any quantum state that wins the CHSH game with probability ω, the entropy H(Ai|E), evaluated on the state of the system after the game has been played, is at least
1 − h(1/2 + 1/2·√(16ω(ω − 1) + 3))        (2)
where h(⋅) is the binary entropy function. This relation is shown in Fig. 2.
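For concreteness, the bound can be evaluated numerically. The sketch below assumes the form of Eq. (2) written above and prints the certified single-round entropy at the classical bound ω = 0.75, at an intermediate value, and at the maximal quantum winning probability ω = (2 + √2)/4 ≈ 0.853.

```python
import math

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def single_round_entropy(omega):
    """Eq. (2): certified entropy for CHSH winning probability omega."""
    arg = 16 * omega * (omega - 1) + 3   # equals (S/2)^2 - 1 for CHSH value S = 8*omega - 4
    return 1 - binary_entropy(0.5 + 0.5 * math.sqrt(max(arg, 0.0)))

for omega in [0.75, 0.80, (2 + math.sqrt(2)) / 4]:
    print(f"omega = {omega:.4f}: certified entropy >= {single_round_entropy(omega):.4f} bits")
```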
To compute the bound on H(Ai|E), Alice and Bob need to collect the statistics they observe while running the protocol and estimate the winning probability ω appearing in Eq. (2); assuming the i.i.d. structure this is easily done using Hoeffding’s inequality.
The conclusion of this section is the following. The i.i.d. assumption plays a crucial role in the above line of proof: it allows us to reduce the problem of calculating the total secrecy of the raw key created by the device to that of bounding the secrecy produced in one round. Instead of dealing with large-scale quantum systems, we are only required to understand the physics of small systems associated with just one round (as in Eq. (2)). The AEP appearing as Eq. (1) does the rest.
Box 1 Device-independent quantum key distribution protocol (simplified example)
Given: A device for Alice and Bob that can play the chosen Bell game repeatedly
For every round i ∈ [n] do Steps 2–4:
Alice and Bob choose Xi,Yi at random.
They input Xi,Yi to the device and record the outputs Ai, Bi.
Alice sets Ki = Ai
Parameter estimation: Alice and Bob estimate the average winning probability in the game from the observed data. If it is below the expected winning probability, ωT, they abort.
Classical post processing: Alice and Bob apply an error correction protocol and a privacy amplification protocol (both classical) on their raw keys K and B.
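To make the flow of Box 1 concrete, the following toy simulation runs the rounds with an idealised honest device that wins the CHSH game with a fixed probability; the device model, the threshold ωT, and all numerical values are illustrative and not part of the protocol specification.

```python
import random

def run_box1(n_rounds, omega_device=0.84, omega_T=0.80):
    """Toy simulation of Box 1: play n_rounds CHSH games and apply the
    parameter-estimation test (illustrative honest-device model only)."""
    wins, raw_key = 0, []
    for _ in range(n_rounds):
        x, y = random.randint(0, 1), random.randint(0, 1)   # Step 2: choose inputs
        a = random.randint(0, 1)                            # Step 3: device outputs
        win = random.random() < omega_device
        b = (a ^ (x * y)) if win else (a ^ (x * y) ^ 1)
        raw_key.append(a)                                    # Step 4: K_i = A_i
        wins += int((a ^ b) == x * y)
    if wins / n_rounds < omega_T:                            # parameter estimation
        return None                                          # abort
    return raw_key, wins / n_rounds

result = run_box1(10_000)
print("aborted" if result is None else f"observed winning probability: {result[1]:.3f}")
```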
Extending to full security
Assuming the device behaves in an i.i.d. way goes completely against the DI setting by imposing severe and even unrealistic restrictions on the implementation of the device. In particular, the assumption implies that the device does not include any internal memory, classical or quantum (i.e., its actions in one round cannot depend on the previous rounds), and cannot display time-dependent behaviour.
Our main contribution can be phrased as follows.
Theorem (Security of DIQKD, informal): Security of DIQKD in the most general case follows from security under the i.i.d. assumption. Moreover, the dependence of the key rate on the number of exchanged signals, n, is the same as the one in the i.i.d. case, up to terms that scale like √n. The key rates are plotted below.
We now explain the above theorem and how it is derived in more detail. A general device is described by an (unknown) tripartite state ρQAQBE, where the bipartite quantum state ρQAQB is shared between Alice and Bob and ρE belongs to Eve, together with the measurements applied to QA and QB when the device is used. No additional structure is assumed (see Fig. 3).
As mentioned above, the standard DIQKD protocol proceeds in rounds (recall Box 1): Alice and Bob use their components in the first round of the protocol and only then proceed to the second round, etc. We leverage this structure to bound the amount of entropy produced during a complete execution of the protocol.
To do so, we prove a generalisation of the AEP given in Eq. (1) to a scenario in which, instead of the raw key being produced by an i.i.d. process, its parts Ki are produced one after the other. In this case, each Ki may depend not only on the i-th round of the protocol but also on everything that happened in previous rounds (but not on subsequent ones). We explain our tool, the EAT, in the following.
The entropy accumulation theorem (EAT)
We describe here a simplified and informal version of the EAT, sufficient to understand how it can be used to prove the security of DI protocols; for the most general statements see the Methods section.
We consider processes which can be described by a sequence of n maps M1, …, Mn, called “EAT channels”, as shown in Fig. 4. Each map Mi outputs two systems, Oi, which describes the information that should be kept secret, and Si, describing some side information leaked by the map, together with a “memory” system Ri, which is passed on as an input to the next map Mi+1. The systems S1, …, Sn describe the side information created during the process. A further quantum system, denoted by E, represents additional side information correlated to the initial state in the beginning of the considered process. The systems O1, …, On are then the ones in which entropy is accumulated, conditioned on the side information S1, …, Sn and E.
To bound the entropy of O1, …, On, we take into account global statistical properties. These are inferred by tests carried out by the protocol on a small sample of the generated outputs. To incorporate such statistical information, we consider for each round an additional classical value Ci computed from Oi and Si. Additionally, in each step of the process, the previous outcomes O1, …, Oi−1 must be independent of the new side information Si given all the past side information S1, …, Si−1 and E. By choosing Oi and Si properly, this condition can be satisfied by sequential protocols such as DIQKD.
The EAT relates the total amount of entropy to the entropy accumulated in one step of the process. The latter is quantified by the minimal, or worst-case, von Neumann entropy produced by the maps Mi when acting on an input state that reproduces the correct statistics on Ci, i.e., states σ for which the distribution of Ci in Mi(σ) equals freq, where freq is the empirical statistics, or frequency distribution, on the observed string c1, …, cn, defined by freq(c) = |{i : ci = c}|/n.
To state the explicit result, we define a “min-tradeoff function”, fmin, from the set of probability distributions over the alphabet of the values Ci to the real numbers; fmin should be chosen as a convex differentiable function which is bounded above by the worst-case entropy just described:
fmin(p) ≤ inf_σ H(Oi | Si R′)Mi(σ)        (3)
where R′ is a reference system isomorphic to the memory Ri−1 and the infimum runs over all input states σ (on Ri−1 and R′) for which the distribution of Ci in Mi(σ) equals p.
An event Ω is defined by a subset of the possible values of C1, …, Cn, and we write pΩ for the probability of the event Ω and
ρ|Ω = (1/pΩ)·ΠΩ ρ ΠΩ        (4)
for the state conditioned on Ω, where ΠΩ denotes the projector onto the strings c1, …, cn ∈ Ω. We further define a set Σ̂ over the set of frequencies such that, for all freq, freq ∈ Σ̂ if and only if there exists a string c1, …, cn ∈ Ω with freq_{c1…cn} = freq.
Theorem (EAT, informal): For any EAT channels M1, …, Mn, an event Ω such that the corresponding set Σ̂ is convex, and a convex min-tradeoff function fmin for which fmin(freq) ≥ t for any freq ∈ Σ̂,
Hmin^εs(O1…On | S1…Sn E) > n·t − v·√n        (5)
where the conditional smooth min-entropy is evaluated on ρ|Ω and v depends on εs, on pΩ, on properties of fmin (in particular its gradient), and on the maximal dimension of the systems Oi.
Equation (5) asserts that, to first order in n, the total conditional smooth min-entropy is at least n times the value of the min-tradeoff function, evaluated on the empirical statistics observed during the protocol (and hence linear in the number of rounds). In the special case where the EAT channels are independent and identical, the EAT is reduced to the quantum AEP; Eq. (5) is thus a generalisation of Eq. (1).
DIQKD security via the EAT
To gain intuition on how the EAT can be applied to DIQKD, note the following. Define the maps Mi to describe the joint behaviour of the honest parties and their respective uncharacterised device while playing a single round of a Bell game such as the CHSH game. Let Ω be the event of the protocol not aborting or a closely related event, e.g., the event that the fraction of CHSH games won is above some threshold ωT. The state for which the smooth min-entropy is evaluated is ρ|Ω, i.e., the state at the end of the protocol conditioned on not aborting. The EAT then implies, in particular, a lower bound on the smooth min-entropy of the outputs conditioned on Eve’s information, evaluated on ρ|Ω.
Furthermore, the condition on the min-tradeoff function stated in Eq. (3) corresponds to the requirement that the distribution of Ci is the one expected from a device winning the CHSH game with probability ωT, which ensures that the entropy in Eq. (3) is evaluated on states that can be used to win the CHSH game with probability ωT. Thus, in order to devise an appropriate min-tradeoff function, we can use the relation appearing in Eq. (2); the exact details are given in the Methods section. This results in a tight bound on the amount of entropy created in each step of the protocol. In this sense, we reduce the problem of proving the security of the whole protocol to that of a single round.
Using the EAT we get a bound on Hmin^εs(K|E) which, to first order in n, coincides with the one derived under the i.i.d. assumption and is thus optimal. The final key rate r = l/n (where l is the length of the final key) produced in a DIQKD protocol depends on this amount of entropy and the amount of information leaked during standard classical post-processing steps. We plot the results for specific choices of parameters in Fig. 5.
To calculate the key rate, one must have some honest implementation of the protocol in mind; this is given by what the experimentalists think (or guess) is happening in their experiment when an adversary is not present. It does not, in any way, restrict the actions of the adversary or the types of imperfections in the device. We consider the following honest implementation, but the analysis can be adapted to any other implementation of interest.
In the realisation of the device, in each round, Alice and Bob share the two-qubit Werner state ρAB = (1 − ν)|φ+⟩⟨φ+| + ν·𝟙/4, resulting from depolarisation noise acting on the maximally entangled state |φ+⟩ = (|00⟩ + |11⟩)/√2. In every round, the measurements for Xi, Yi ∈ {0, 1} are as described in Fig. 1 and for Yi = 2 Bob’s measurement is σz. The winning probability in the CHSH game (restricted to Xi, Yi ∈ {0, 1}) using these measurements on ρAB is ω = (2 + √2(1 − ν))/4. The quantum bit error rate for the above state and measurements is given by Q = ν/2.
The key rate r is plotted in Fig. 5. For n = 10^15, the curve essentially coincides with the rate achieved in the asymptotic i.i.d. case19. Since the latter was shown to be optimal19, it provides an upper bound on the key rate and the amount of tolerable noise. Hence, for large enough n our rates become optimal and the protocol can tolerate up to the maximal error rate Q = 7.1%. For comparison, the previously established explicit rates28 are well below the lowest curve presented in Fig. 5, even when the number of signals goes to infinity, with a maximal noise tolerance of 1.6%. Moreover, our key rates are comparable to those achieved in device-dependent QKD protocols35.
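As a consistency check of the quoted noise tolerance, the asymptotic (n → ∞) rate of the honest implementation can be evaluated as the certified entropy of Eq. (2) minus the error-correction cost h(Q), with ω = (2 + √2(1 − 2Q))/4 as above. The sketch below is a back-of-the-envelope computation (it reproduces the ~7.1% threshold), not the finite-size rates plotted in Fig. 5.

```python
import math

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def asymptotic_rate(Q):
    """Asymptotic DIQKD rate for the depolarised honest implementation."""
    omega = (2 + math.sqrt(2) * (1 - 2 * Q)) / 4      # CHSH winning probability
    arg = 16 * omega * (omega - 1) + 3
    certified = 1 - binary_entropy(0.5 + 0.5 * math.sqrt(max(arg, 0.0)))
    return certified - binary_entropy(Q)              # subtract error-correction cost

# Bisection for the largest Q with a positive rate (expected around 7.1%).
lo, hi = 0.0, 0.25
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if asymptotic_rate(mid) > 0 else (lo, mid)
print(f"maximal tolerable QBER ~ {lo:.3%}")
```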
Discussion
The information theoretic tool, the EAT, reveals a novel property of entropy: the operationally relevant total uncertainty about an n-partite system created in a sequential process corresponds to the sum of the entropies of its parts, even without an independence assumption.
Using the EAT, we show that practical and realistic protocols can be used to achieve the unprecedented level of DI security. The next major challenge in experimental implementations is a field demonstration of a DIQKD protocol. This would provide the strongest cryptographic experiment ever realised. The work presented here provides the theoretical groundwork for such experiments. Our quantitative results imply that the first proofs of principle experiments, with small distances and small rates, are within reach with today’s state-of-the-art technology, which recently enabled the violation of Bell inequalities in a loophole-free way.
Methods
We state here the main theorems of our work and sketch the proofs. Using the explicit expressions given below, one can reproduce the key rates presented in Fig. 5.
The formal statement and proof idea of the EAT
In this section, we are interested in the general question of whether entropy accumulates, in the sense that the operationally relevant total uncertainty about an n-partite system corresponds to the sum of the entropies of its parts Oi. The AEP, given in Eq. (1), implies that this is indeed the case to first order in n—under the assumption that the parts Oi are identical and independent of each other. Our result shows that entropy accumulation occurs for more general processes, i.e., without an independence assumption, provided one quantifies the uncertainty about the individual systems Oi by the von Neumann entropy of a suitably chosen state.
The type of processes that we consider are those that can be described by a sequence of channels, as illustrated in Fig. 4. Such channels are called EAT channels and are formally defined as follows.
Definition 1 (EAT channels): EAT channels , for i ∈ [n], are CPTP (completely positive trace preserving) maps such that for all i ∈ [n]:
Ci are finite-dimensional classical systems (random variables). Oi, Si, and Ri are quantum registers. The dimension of Oi is at most some constant dO for all i ∈ [n].
For any input state ρRi−1R′, where R′ is a register isomorphic to Ri−1, the output state (Mi ⊗ IR′)(ρRi−1R′) has the property that the classical value Ci can be obtained by a measurement of the systems Oi and Si without changing the state.
For any initial state ρR0E, the final state fulfils the Markov chain condition O1…Oi−1 ↔ S1…Si−1E ↔ Si for each i ∈ [n].
In the above definition, A ↔ B ↔ C is a Markov chain if and only if the conditional mutual information vanishes, I(A : C | B) = 0.
Next, one should find an adequate way to quantify the amount of entropy which is accumulated in a single step of the process, i.e., in an application of just one channel. To do so, let p be a probability distribution over C, where C denotes the common alphabet of the values C1, …, Cn, and let R′ be a system isomorphic to Ri−1. We define the set of states
Σi(p) = { σCiOiSiRiR′ = (Mi ⊗ IR′)(ωRi−1R′) : ω a state on Ri−1 ⊗ R′ such that σCi = p }        (6)
where σCi denotes the probability distribution over C obtained by measuring the register Ci of σ.
The tradeoff functions for the EAT channels are defined below.
Definition 2 (Tradeoff functions): A real function f from the set of probability distributions over C to the real numbers is called a min- or max-tradeoff function for the channels Mi if it satisfies
fmin(p) ≤ inf_{σ ∈ Σi(p)} H(Oi | Si R′)σ        (7)
or
fmax(p) ≥ sup_{σ ∈ Σi(p)} H(Oi | Si R′)σ        (8)
respectively, and if it is convex or concave, respectively. If the set Σi(p) is empty, then the infimum and supremum are by definition equal to ∞ and −∞, respectively, so that the conditions are trivial.
To get some intuition as to why the above definition is the “correct” one, consider the following classical example. Each EAT channel outputs a single bit Oi without any side information Si about it; the system E is empty as well. Every bit can depend on the ones produced previously. We would like to extract randomness out of the sequence O1, …, On; for this we should find a lower bound on the smooth min-entropy Hmin^ε(O1…On).
We ask the following question—given the randomness of O1 which is already accounted for, how much randomness does O2 contribute? One possible guess is the conditional von Neumann entropy H(O2|O1). If, however, O1 is uniform while O2 is fixed when O1 = 0 and uniform otherwise, then H(O2|O1) is too optimistic; the amount of extractable randomness is quantified by the smooth min-entropy, which depends on the most probable value of O1O2, and not by an average quantity such as the von Neumann entropy.
Another possible guess is a worst-case version of the min-entropy, min_{o1} Hmin(O2|O1 = o1). This, however, is too pessimistic; when the Oi’s are independent of each other, the extractable amount of randomness behaves like the von Neumann entropy in first order, and not like the min-entropy.
We therefore choose an intermediate quantity—the worst-case conditional von Neumann entropy min_{o1} H(O2|O1 = o1). That is, this quantity is the von Neumann entropy of O2, evaluated for the worst-case state in the beginning of the second step of the process. The min-tradeoff function defined above is the quantum analogue of this.
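The two-bit example above can be checked numerically; the sketch below computes the three candidate quantities for the distribution in which O1 is uniform and O2 is fixed when O1 = 0 and uniform otherwise.

```python
import math

def shannon(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution of (O1, O2): O1 uniform; O2 = 0 if O1 = 0, uniform if O1 = 1.
joint = {(0, 0): 0.5, (1, 0): 0.25, (1, 1): 0.25}

# Average-case guess: conditional entropy H(O2|O1) -- too optimistic here.
H_O2_given_O1 = 0.5 * shannon([1.0]) + 0.5 * shannon([0.5, 0.5])

# Actual extractable randomness of the pair: min-entropy of (O1, O2).
H_min_pair = -math.log2(max(joint.values()))

# Worst-case single-step entropy (the idea behind the min-tradeoff function).
worst_case_step = min(shannon([1.0]), shannon([0.5, 0.5]))

print(f"H(O2|O1)         = {H_O2_given_O1:.2f}  (1 + 0.5 = 1.5 bits would be too optimistic)")
print(f"H_min(O1 O2)     = {H_min_pair:.2f}  (bits actually extractable)")
print(f"worst-case H(O2) = {worst_case_step:.2f}")
```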
Informally, the min-tradeoff function can be understood as the amount of entropy available from a single round, conditioned on the outputs of the previous rounds. Since we condition on the previous rounds, one can think of the randomness of the current round as independent from past events. Intuitively, this suggests that, by appropriately generalising the proof of the AEP, one can argue that the entropy that is contributed by this independent randomness in each round accumulates.
The formal statement of the EAT is as follows.
Theorem 3 (EAT, formal): Let Mi for i ∈ [n] be EAT channels, ρ be the final state, Ω an event defined over the values of C1, …, Cn, pΩ the probability of Ω in ρ, and ρ|Ω the final state conditioned on Ω. Let εs ∈ (0, 1).
For fmin a convex min-tradeoff function for the channels Mi, and any t ∈ ℝ such that fmin(freq_{c1…cn}) ≥ t for any string c1, …, cn for which Pr[c1, …, cn] > 0 in ρ|Ω,
Hmin^εs(O1…On | S1…Sn E)ρ|Ω > n·t − v·√n        (9)
where
v = 2·(log(1 + 2·dO) + ⌈‖∇fmin‖∞⌉)·√(1 − 2·log(εs·pΩ))        (10)
with dO the maximal dimension of the systems Oi and ‖∇fmin‖∞ the maximal norm of the gradient of fmin.
Similarly, for fmax a max-tradeoff function and any t ∈ ℝ such that fmax(freq_{c1…cn}) ≤ t for any string c1, …, cn for which Pr[c1, …, cn] > 0 in ρ|Ω,
Hmax^εs(O1…On | S1…Sn E)ρ|Ω < n·t + v·√n        (11)
The two most important properties of the above statement are that the first-order term is linear in n and that t is the von Neumann entropy of a suitable state (as explained above). This implies that the EAT is tight to first order in n.
We remark that the Markov chain conditions are important, in the sense that dropping them completely would render the statement invalid.
We now give a rough proof sketch for the Hmin case; the bound for Hmax follows from an almost identical argument. The proof has a similar structure to that of the quantum AEP31, which we can retrieve as a special case. The proof relies heavily on the “sandwiched” Rényi entropies36,37, a family of entropies that we will denote here by Hα, where α is a real parameter ranging from 1/2 to ∞, and which corresponds to the max-entropy at α = 1/2, to the von Neumann entropy when α = 1, and to the min-entropy when α = ∞.
The basic idea is to first lower bound the smooth min-entropy Hmin^εs by Hα using the following general bound31,38–40:
Hmin^εs(A|B)ρ ≥ Hα(A|B)ρ − (1/(α − 1))·log(2/εs²)        (12)
Then, we lower bound the Hα term by the von Neumann entropy using the following31,39:
Hα(A|B)ρ ≥ H(A|B)ρ − (α − 1)·log²(1 + 2·dA)        (13)
where dA denotes the dimension of the system A.
Now, we could simply chain these two inequalities and apply them to the full systems O1…On and S1…Sn E. However, this would result in a very poor bound due to the dimension-dependent term in Eq. (13), which in our case would be O(n²). To get the bound we want, we need to reduce this term to O(n); choosing α − 1 of order 1/√n would then produce a bound with the right scaling.
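A quick numerical check of this parameter choice: if the total correction consists of a term proportional to (α − 1)·n (from applying Eq. (13) to n constant-size systems) plus a term proportional to 1/(α − 1) (from Eq. (12)), then α − 1 ≈ 1/√n balances the two and yields an overall O(√n) correction. The constants below are arbitrary placeholders.

```python
import math

def correction(n, alpha_minus_1, c1=1.0, c2=1.0):
    """Total correction: (alpha - 1)*c1*n from the per-round terms plus
    c2/(alpha - 1) from the smoothing term (constants are placeholders)."""
    return alpha_minus_1 * c1 * n + c2 / alpha_minus_1

n = 10**8
for a in (1e-2, 1.0 / math.sqrt(n), 1e-6):
    print(f"alpha - 1 = {a:.1e}: total correction = {correction(n, a):.3e}")
```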
The trick we use to achieve this is to decompose Hα(O1…On | S1…Sn E) into n terms of constant size before applying Eq. (13). In the quantum AEP31, this step is immediate since the state is i.i.d. Here, we must use more sophisticated techniques. Specifically, we use the following chain rule for the sandwiched Rényi entropy to decompose this quantity into n terms:
Theorem 4: Let ρRA1B1 be a density operator on R ⊗ A1 ⊗ B1 and M be a CPTP map from R to A2 ⊗ B2. Assuming that M(ρ) satisfies the Markov condition A1 ↔ B1 ↔ B2, we have
Hα(A1A2 | B1B2)M(ρ) ≥ Hα(A1|B1)ρ + inf_ω Hα(A2 | B2A1B1)M(ω)        (14)
where the infimum ranges over density operators ω on R ⊗ A1 ⊗ B1. Moreover, if ρ is pure, then we can optimise over pure states ω.
Implementing this proof strategy then yields the following chain of inequalities:
Hmin^εs(O1…On | S1…Sn E) ≥ Hα(O1…On | S1…Sn E) − O(1/(α − 1))
≥ Σi inf_ω Hα(Oi | Si R′)Mi(ω) − O(1/(α − 1))
≥ Σi inf_ω H(Oi | Si R′)Mi(ω) − n·(α − 1)·O(1) − O(1/(α − 1))        (15)
However, this does not yet take into account the sampling over the Ci subsystems. To do this, we tweak the EAT channels to output two extra systems Di and D̄i which contain an amount of entropy that depends on the value of Ci observed. To define this, let g be an affine lower bound on fmin, let [gmin, gmax] be the smallest real interval that contains the range of g, and consider the channel that reads the classical value Ci and appends a state τ(c) on DiD̄i. Concretely, we define it as
|c⟩⟨c| ↦ |c⟩⟨c| ⊗ τ(c)        (16)
where τ(c) is a mixture between a maximally entangled state and a fully mixed state on DiD̄i such that the marginal on D̄i is uniform, and such that H(Di|D̄i)τ(c) = gmax − g(δc) (here δc stands for the distribution with all the weight on element c). To ensure that this is possible, we need to choose the dimension of Di large enough, and it turns out that a dimension of order 2^(gmax − gmin) suffices. We can then define a new sequence of EAT channels M̄i by composing Mi with this channel.
Armed with this, we apply the above argument to our new EAT channels. On the one hand, a more sophisticated version of Eq. (12) yields:
17 |
On the other hand, the argument from Eq. (15) can be used here to give
18 |
Combining these two bounds then yields the theorem.
We remark that some of the concepts used in this work generalise techniques proposed in the recent security proofs for DI cryptography29.
Entropy accumulation protocol
To analyse the key rates of the DIQKD protocol, we first find a lower bound on the amount of entropy accumulated during the run of the protocol, when the honest parties use their device to play the Bell games repeatedly. To this end, we consider the “entropy accumulation protocol” shown in Box 2. This protocol can be seen as the main building block of many DI cryptographic protocols.
The entropy accumulation protocol creates m blocks of bits, each of maximal length smax. Each block ends (with high probability) with a test round; this is a round in which Alice and Bob play the CHSH game with their device so that they can verify that the device acts as expected. The probability of each round being a test round is γ. The rest of the rounds are generation rounds, in which Bob chooses a special input for his component of the device. At the end of the protocol, Alice and Bob check whether they won the CHSH game in sufficiently many test rounds. If not, they abort.
We note that the protocol is complete, in the sense that there exists an honest implementation of it (possibly noisy) which does not abort with high probability. The completeness error, i.e., the probability that the protocol aborts for an honest implementation of the devices D, can easily be bounded using Hoeffding’s inequality: for an honest i.i.d. implementation it is exponentially small in the number of blocks m.
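As an illustration of this completeness argument, Hoeffding’s inequality bounds the probability that the average test score of an honest i.i.d. device drops more than δest below its mean; the expression below is the generic Hoeffding bound and ignores protocol-specific factors (such as blocks without a test round).

```python
import math

def hoeffding_abort_probability(m, delta_est):
    """Generic Hoeffding bound for m i.i.d. indicator variables deviating
    by more than delta_est below their mean (illustrative completeness bound)."""
    return math.exp(-2 * m * delta_est**2)

for m in [10**4, 10**5, 10**6]:
    print(f"m = {m:.0e} blocks, delta_est = 0.01: "
          f"abort probability <= {hoeffding_abort_probability(m, 0.01):.2e}")
```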
Next, we show that the protocol is also sound. That is, for any device D, if the probability that the protocol does not abort is not too small, then the total amount of smooth min-entropy is sufficiently high.
The EAT can be used to bound the total amount of smooth min-entropy created when running the entropy accumulation protocol, given that it did not abort. Here n denotes the expected number of rounds of the protocol and εs is one of the security parameters (to be fixed later).
Below we use the following notation. For each block j ∈ [m], Āj denotes the string that includes Alice’s outputs in block j (note that the length of this string is not fixed in advance, but it is at most smax). B̄j, X̄j, Ȳj, and T̄j are defined analogously. To use the EAT, we make the following choices of random variables:
19 |
20 |
21 |
22 |
The event Ω is the event of not aborting the protocol, as given in Step 11 in Box 2:
23 |
The EAT channels are chosen to be
24 |
where Mj describes Steps 2–10 of block j in the entropy accumulation protocol (Box 2). These channels include both the actions made by Alice and Bob as well as the operations made by the device D in these steps. Note that the device’s operations can always be described within the formalism of quantum mechanics, although we do not assume we know them. The registers Rj−1 and Rj hold the quantum state of the device at the beginning and the end of the j’th step of the protocol, respectively.
Lemma 5: The channels described above are EAT channels.
Proof. For the channels to be EAT channels, they need to fulfil the conditions given in Definition 1. We show that this is indeed the case. First, the outputs of each block, including Cj, are finite-dimensional classical registers of bounded size. Second, Cj is determined by the classical registers produced in block j, as shown in Box 2. Therefore, Cj can be calculated without modifying the marginal on those registers. The third condition is also fulfilled since the inputs are chosen independently in each round, and hence the Markov chain condition trivially holds.
To continue, one should devise a min-tradeoff function. Let p be the probability distribution describing the classical value Cj. We remark that due to the structure of our EAT channels, it is sufficient to consider distributions p that can actually be produced by the channels (otherwise the set Σ defined in Eq. (6) is an empty set).
The following lemma gives a lower bound on the von Neumann entropy of the outputs in a single block.
Lemma 6: Let s̄ be the expected length of a block and h the binary entropy function. Then,
25 |
where the entropy is evaluated on a state that wins the CHSH game, in the test round, with probability
26 |
Proof sketch. The amount of entropy accumulated in a single round in a block is given in Eq. (2) in the main text. To get the amount of entropy accumulated in a block, one can use the chain rule for the von Neumann entropy. The result is then
27 |
where the pre-factors (1 − γ)^(i−1) are attributed to the fact that the entropy in each round is non-zero only if the round is part of the block, i.e., if a test round was not performed before the i’th round in the block, and ωi denotes the winning probability in the i’th round (given that a test was not performed before).
The value of each ωi is not fixed completely given ω*. However, by the operation of the EAT channels the following relation holds:
28 |
To conclude the proof, we thus need to minimise the resulting expression under the above constraint. Using standard techniques, e.g., Lagrange multipliers, one can see that the minimal value of this entropy is achieved for ωi = ω* for all i, and the lemma follows.
The bound given in the above lemma can now be used to define the min-tradeoff function fmin. As the derivative of the function plays a role in the final bound, we must make sure it is not too large at any point. This can be enforced by “cutting” the function at a chosen point and “gluing” it to a linear function starting at that point, as shown in Fig. 6. The cutting point can be chosen, depending on the other parameters, such that the total amount of smooth min-entropy is maximal. Following this idea, the resulting min-tradeoff function is given by
29 |
30 |
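The “cut-and-glue” idea can be illustrated in the simplest setting γ = 1 and smax = 1, where a block is a single CHSH round and the underlying curve is just the single-round bound of Eq. (2) as a function of the winning probability. The crossover point and the resulting function in the sketch below are illustrative and do not reproduce the exact Eqs. (29) and (30).

```python
import math

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def g(omega):
    """Underlying entropy bound (Eq. (2)) as a function of the winning probability."""
    arg = 16 * omega * (omega - 1) + 3
    return 1 - binary_entropy(0.5 + 0.5 * math.sqrt(max(arg, 0.0)))

def min_tradeoff(omega, omega_t=0.84, d=1e-6):
    """Cut g at omega_t and glue a tangent line beyond it, keeping the derivative bounded."""
    if omega <= omega_t:
        return g(omega)
    slope = (g(omega_t + d) - g(omega_t - d)) / (2 * d)   # numerical derivative at the cut
    return g(omega_t) + slope * (omega - omega_t)

for w in [0.78, 0.82, 0.85, (2 + math.sqrt(2)) / 4]:
    print(f"omega = {w:.4f}: glued f = {min_tradeoff(w):.4f}, original g = {g(w):.4f}")
```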
Let εEA be the desired error probability of the entropy accumulation protocol. We can then use Theorem 3 to say that either the probability of the protocol aborting is greater than 1 − εEA or the following bound on the total smooth min-entropy holds:
31 |
where
32 |
To illustrate the behaviour of the entropy rate ηopt, we plot it as a function of the expected Bell violation ωexp in Fig. 7 for γ = 1 and smax = 1. For comparison, we also plot in Fig. 7 the asymptotic rate (n → ∞) under the assumption that the state of the device is an (unknown) i.i.d. state. In this case, the quantum AEP appearing in Eq. (1) implies that the optimal rate is the von Neumann entropy accumulated in one round of the protocol (as given in Eq. (2)). This rate, appearing as the dashed line in Fig. 7, is an upper bound on the entropy that can be accumulated. One can see that as the number of rounds in the protocol increases, our rate ηopt approaches this optimal rate.
For the calculations of the DIQKD rates later on, we make a specific choice of the parameters γ and smax. For this choice, the first-order term of ηopt is linear in n, and a short calculation reveals that the second-order term scales, roughly, as √n.
Our DIQKD protocol, shown in Box 3, is based on the entropy accumulation protocol described above. In the first part of the protocol Alice and Bob use their devices to produce the raw data, similarly to what is done in the entropy accumulation protocol. The main difference is that Bob’s outputs always contain his measurement outcomes (instead of being set to ⊥ in the generation rounds); to make the distinction explicit, we denote Bob’s outputs in the DIQKD protocol with a tilde, B̃i.
We now describe the three post-processing steps, error correction, parameter estimation, and privacy amplification, in more detail.
Box 2 Entropy accumulation protocol (based on the CHSH game)
Given:
D—device that can play the CHSH game repeatedly
m ∈ ℕ—number of blocks
smax ∈ ℕ—maximal length of a block
γ ∈ (0, 1]—probability of a test round
ωexp—expected winning probability in the honest implementation
δest ∈ (0, 1)—width of the statistical confidence interval
For every block j ∈ [m] do Steps 2–10:
Set i = 0 and .
If i ≤ smax:
Set i = i + 1.
Alice and Bob choose Ti ∈ {0, 1} at random such that Pr(Ti = 1) = γ.
If Ti = 1 Alice and Bob choose inputs Xi ∈ {0,1} and Yi ∈ {0,1}.
If Ti = 0 they choose inputs Xi ∈ {0, 1} and Yi = 2.
Alice and Bob use D with Xi,Yi and record their outputs as Ai, Bi.
If Ti = 0 Bob updates Bi to Bi = ⊥.
If Ti = 1 they set Cj = 1 if Ai ⊕ Bi = Xi·Yi and Cj = 0 otherwise.
Alice and Bob abort if the fraction of test rounds won, as recorded by the values Cj, is smaller than ωexp − δest.
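A toy rendering of the block structure of Box 2 is sketched below; the device model is an idealised i.i.d. device and the abort test is written in the verbal form used above, so the snippet only illustrates the control flow, not the actual protocol analysis.

```python
import random

def run_block(gamma, s_max, omega_device):
    """One block: rounds are played until a test round occurs (probability gamma
    per round) or s_max rounds have passed; returns C_j (None if no test round)."""
    for _ in range(s_max):
        test_round = random.random() < gamma           # T_i = 1 with probability gamma
        x = random.randint(0, 1)
        y = random.randint(0, 1) if test_round else 2  # Y_i = 2 in generation rounds
        a = random.randint(0, 1)                       # toy device output for Alice
        if test_round:
            win = random.random() < omega_device
            b = (a ^ (x * y)) if win else (a ^ (x * y) ^ 1)
            return 1 if (a ^ b) == x * y else 0        # C_j for a tested block
    return None

def entropy_accumulation(m=100_000, gamma=0.05, s_max=20,
                         omega_device=0.84, omega_exp=0.84, delta_est=0.01):
    scores = [run_block(gamma, s_max, omega_device) for _ in range(m)]
    tested = [c for c in scores if c is not None]
    frac_won = sum(tested) / len(tested)
    return frac_won, frac_won < omega_exp - delta_est   # (statistics, aborted?)

print(entropy_accumulation())
```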
Box 3 DIQKD protocol (based on the CHSH game)
Given:
D—device that can play the CHSH game repeatedly
m ∈ ℕ—number of blocks
smax ∈ ℕ—maximal length of a block
γ ∈ (0, 1]—probability of a test round
ωexp—expected winning probability in the honest implementation
δest ∈ (0, 1)—width of the statistical confidence interval
EC—error correction protocol which leaks leakEC bits and has error probability εEC
PA—privacy amplification protocol with error probability εPA
For every block j ∈ [m] do Steps 2–8:
Set i = 0 and .
If i ≤ smax:
Set i = i + 1.
Alice and Bob choose Ti ∈ {0, 1} at random such that Pr(Ti = 1) = γ.
If Ti = 1 Alice and Bob choose inputs Xi ∈ {0,1} and Yi ∈ {0,1}.
If Ti = 0 they choose inputs Xi ∈ {0, 1} and Yi = 2.
Alice and Bob use D with Xi, Yi and record their outputs as Ai, B̃i.
Error correction: Alice and Bob apply the error correction protocol EC on the outputs A and B̃, communicating O in the process. If EC aborts they abort the protocol. Otherwise, they obtain raw keys denoted by KA and KB.
Parameter estimation: Using B̃ and KB, Bob sets Cj = 1 if the CHSH condition was satisfied in the test round of block j, Cj = 0 if it was not, and Cj = ⊥ for blocks without a test round. He aborts if the fraction of test rounds won is smaller than ωexp − δest.
Privacy amplification: Alice and Bob apply the privacy amplification protocol PA on KA and KB to create their final keys K̃A and K̃B of length l, as defined in Eq. (35).
Error correction
Alice and Bob use an error correction protocol EC to obtain identical raw keys KA and KB from their bits A and B̃. In our analysis, we use a protocol, based on universal hashing, which minimises the amount of leakage to the adversary41,42. To implement this protocol, Alice chooses a hash function and sends the chosen function and the hashed value of her bits to Bob. We denote this classical communication by O. Bob uses O, together with his prior knowledge B̃ and the inputs used, to compute a guess for Alice’s bits. If EC fails to produce a good guess, Alice and Bob abort; in an honest implementation, this happens only with small probability, which contributes to the completeness error. If Alice and Bob do not abort, then they hold raw keys KA and KB, and KA = KB with probability at least 1 − εEC.
Due to the communication from Alice to Bob, leakEC bits of information are leaked to the adversary. The following guarantee holds for the described protocol42:
33 |
for appropriately chosen error parameters, and where the entropic quantity appearing in the bound is evaluated on the state of an honest implementation of the protocol. If a larger fraction of errors occur when running the actual DIQKD protocol (for instance due to adversarial interference) the error correction might not succeed, as Bob will not have a sufficient amount of information to obtain a good guess of Alice’s bits. If so, this will be detected with probability at least 1 − εEC and the protocol will abort. In an honest implementation of the device, Alice and Bob’s outputs in the generation rounds should be highly correlated in order to minimise the leakage of information.
Parameter estimation
After the error correction step, Bob has all of the relevant information to perform parameter estimation from his data alone, without any further communication with Alice. Using B̃ and KB, Bob sets Cj = 1 if the CHSH game was won in the test round of block j (played at round i of the block), Cj = 0 if it was lost, and Cj = ⊥ for blocks without a test round. He aborts if the fraction of successful test rounds is too low, that is, smaller than ωexp − δest.
As Bob does the estimation using his guess of Alice’s bits, the probability of aborting in this step in an honest implementation is bounded by the probability that either the error correction step fails or the honest device produces too few winning test rounds.
Privacy amplification
Finally, Alice and Bob use a (quantum-proof) privacy amplification protocol PA (which takes some random seed S as input) to create their final keys K̃A and K̃B of length l, which are close to ideal keys, i.e., uniformly random and independent of the adversary’s knowledge.
For simplicity, we use universal hashing43 as the privacy amplification protocol in the analysis below. Any other quantum-proof strong extractor, e.g., Trevisan’s extractor44, can be used for this task and the analysis can be easily adapted.
The secrecy of the final key depends only on the privacy amplification protocol used and the value of the smooth min-entropy Hmin^εs(KA|E), evaluated on the state at the end of the protocol, conditioned on not aborting. For universal hashing, for every εPA, εs ∈ (0, 1), a secure key of maximal length
34 |
is produced with probability at least 1 − εPA − εs.
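The role of the smooth min-entropy in this step can be illustrated with the generic leftover-hashing expression, in which the extractable key length is roughly the smooth min-entropy minus a 2·log(1/εPA) penalty. The sketch below uses this generic form as a stand-in for the exact Eq. (34).

```python
import math

def leftover_hash_key_length(h_min_bound, eps_pa):
    """Generic leftover-hashing key length: smooth min-entropy minus a
    2*log2(1/eps_PA) penalty (stand-in for the exact Eq. (34))."""
    return max(0, math.floor(h_min_bound - 2 * math.log2(1.0 / eps_pa)))

# Example: 10^7 rounds certifying 0.4 bits of smooth min-entropy per round.
print(leftover_hash_key_length(h_min_bound=0.4 * 10**7, eps_pa=1e-10))
```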
Correctness, secrecy, and overall security of a DIQKD protocol are defined as follows45:
Definition 7 (Correctness): A DIQKD protocol is said to be εcorr-correct, when implemented using a device D, if Alice and Bob’s keys, K̃A and K̃B respectively, are identical with probability at least 1 − εcorr. That is, Pr(K̃A ≠ K̃B) ≤ εcorr.
Definition 8 (Secrecy): A DIQKD protocol is said to be εsec-secret, when implemented using a device D, if for a key of length l, (1 − Pr[abort])·½‖ρK̃AE − ρUl ⊗ ρE‖₁ ≤ εsec, where E is a quantum register that may initially be correlated with D and ρUl denotes the maximally mixed state on a key of length l.
εsec in the above definition can be understood as the probability that some non-trivial information leaks to the adversary45. If a protocol is εcorr-correct and εsec-secret (for a given D), then it is ε-correct-and-secret for any ε ≥ εcorr + εsec.
Definition 9 (Security): A DIQKD protocol is said to be (εsound, εcomp)-secure if:
(Soundness) For any implementation of the device D it is εsound-correct-and-secret.
(Completeness) There exists an honest implementation of the device D such that the protocol does not abort with probability greater than 1 − εcomp.
Below we show that the following theorem holds.
Theorem 10: The DIQKD protocol described above is (εsound, εcomp)-secure, with a soundness error εsound determined by εEC, εPA, εs, and εEA, a completeness error εcomp determined by the completeness errors of the error correction and entropy accumulation steps, and key length l given by
35 |
where t is a correction term accounting for fluctuations in the total number of rounds, chosen as explained below, for any εt ∈ (0, 1).
We now explain the steps taken to prove Theorem 10. The completeness part follows trivially from the completeness of the “subprotocols”.
To establish soundness, first note that by definition, as long as the protocol does not abort it produces a key of length l. Therefore, it remains to verify correctness, which depends on the error correction step, and security, which is based on the privacy amplification step. To prove security we start with Lemma 11, in which we assume that the error correction step is successful. We then use it to prove soundness in Lemma 12.
Let Ω̂ denote the event of the DIQKD protocol not aborting and the EC protocol being successful, and let ρ|Ω̂ be the state at the end of the protocol, conditioned on this event.
Success of the privacy amplification step relies on the min-entropy Hmin^εs(KA|E), evaluated on ρ|Ω̂, being sufficiently large. The following lemma connects this quantity to the entropy accumulated in the entropy accumulation protocol, on which a lower bound is provided in Eq. (31) above.
Lemma 11: For any device D, let ρ be the state generated in the protocol right before the privacy amplification step. Let ρ|Ω̂ be the state conditioned on not aborting the protocol and success of the EC protocol. Then, for any εEA, εEC, εs, εt ∈ (0, 1), either the protocol aborts with probability greater than 1 − εEA − εEC or
36 |
Proof sketch. Before deriving a bound on the entropy of interest, we remark that t is chosen such that the probability that the actual number of rounds in the protocol, N, is larger than the expected number of rounds n plus t is at most εt. The above value for t can be derived by noticing that the sizes of the blocks are i.i.d. random variables which take values in [1, 1/γ].
The key idea of the proof is to consider the following events:
Ω: the event of not aborting in the entropy accumulation protocol. This happens when the Bell violation, calculated using Alice and Bob’s outputs (and inputs), is sufficiently high.
Ω̃: Suppose Alice and Bob run the entropy accumulation protocol, and then execute the EC protocol. The event Ω̃ is defined by Ω together with the EC protocol being successful.
Ω̂: the event of not aborting the DIQKD protocol and the EC protocol being successful.
The state ρ|Ω̃ then denotes the state at the end of the entropy accumulation protocol conditioned on Ω̃.
Using a sequence of chain rules for smooth entropies46 and the relation between ρ|Ω̂ and ρ|Ω̃ (the two states differ only in registers that are traced out), one can conclude
37 |
The remaining entropy term can be bounded from above. The intuition is that Bob’s recorded output Bi differs from ⊥ only when Ti = 1, which happens with probability γ. The exact bound can be calculated using the EAT and is given by
38 |
The above steps together with Eq. (31) conclude the proof.
Using Lemma 11, one can prove that our DIQKD protocol is sound.
Lemma 12: For any device D let ρ be the state generated by the DIQKD protocol. Then either the protocol aborts with probability greater than 1 − εEA − εEC or it is (εEC + εPA + εs)-correct-and-secret while producing keys of length l, as defined in Eq. (35).
Proof sketch. Assume the DIQKD protocol did not abort. We consider two cases. First, assume that the EC protocol was not successful (but did not abort). Then Alice and Bob’s final keys might not be identical. This happens with probability at most εEC. Otherwise, assume the EC protocol was successful, i.e., KA = KB. In that case, Alice and Bob’s keys must be identical also after the final privacy amplification step.
The secrecy depends only on the privacy amplification step, and for universal hashing a secure key is produced as long as Eq. (34) holds. Hence, a uniform and independent key of length as in Eq. (35) is produced by the privacy amplification step unless the smooth min-entropy is not high enough or the privacy amplification protocol was not successful, which happens with probability at most εPA + εs.
According to Lemma 11, either the protocol aborts with probability greater than 1 − εEA − εEC or the entropy is sufficiently high to create the secret key.
The expected key rates appearing in Fig. 5 in the main text are given by r = l/n. The key rate depends on the amount of leakage of information due to the error correction step, which in turn depends on the honest implementation of the protocol, as mentioned above. To have an explicit bound, we consider the honest implementation described in the main text. Using Eq. (33) and the AEP, one can show that the amount of leakage in the error correction step is then given by
39 |
To get the optimal key rates, one should fix the parameters of interest (e.g., the expected winning probability ωexp, the quantum bit error rate Q, and the number of rounds n) and optimise over all other parameters.
DI randomness expansion
The entropy accumulation protocol can be used to perform DI randomness expansion as well. In a DI randomness expansion protocol, the honest parties start with a short seed of perfect randomness and use it to create a longer random secret string. For the purposes of randomness expansion, we may assume that the parties are co-located, therefore, the main difference from the DIQKD scheme is that there is no need for error correction (and hence there is no leakage of information due to public communication).
In order to minimise the amount of randomness required to execute the protocol, we adapt the main entropy accumulation protocol by choosing the inputs Xi, Yi ∈ {0, 1} deterministically in the generation rounds. In particular, there is no use for the input 2 to Bob’s device, and no randomness is required for the generation rounds. Aside from the last step of privacy amplification, the remainder of the protocol is essentially the same as the entropy accumulation protocol.
The plotted entropy rates in Fig. 7 are therefore also the ones relevant for a DI randomness expansion.
Since we are concerned here not only with generating randomness but also with expanding the amount of randomness initially available to the users of the protocol, we should also evaluate the total number of random bits that are needed to execute the protocol. Random bits are required to select which rounds are test rounds, i.e., the random variables Ti, to select inputs to the devices in the testing rounds, i.e., those for which Ti = 1, and to select the seed for the quantum-proof extractor used for privacy amplification. All of these can be accounted for using standard techniques and so we omit the detailed explanation and formulas.
Data availability
No data sets were generated or analysed during the current study.
Acknowledgements
We thank Asher Arnon for the illustrations presented in Figs. 1, 4, and 5. R.A.F. and R.R. were supported by the Stellenbosch Institute for Advanced Study (STIAS), by the European Commission via the project “RAQUEL”, by the Swiss National Science Foundation (grant No. 200020–135048) and the National Centre of Competence in Research “Quantum Science and Technology”, by the European Research Council (grant No. 258932), and by the US Air Force Office of Scientific Research (grant No. FA9550-16-1-0245). F.D. acknowledges the financial support of the Czech Science Foundation (GA ČR) project no GA16-22211S and of the European Commission FP7 Project RAQUEL (grant No. 323970). O.F. acknowledges support from the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). T.V. was partially supported by NSF CAREER Grant CCF-1553477, an AFOSR YIP award, the IQIM, and NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support of the Gordon and Betty Moore Foundation (GBMF-12500028).
Author contributions
R.A.F., F.D., O.F., R.R., and T.V. contributed equally to this work.
Competing interests
The authors declare that they have no competing financial interests.
Footnotes
Electronic supplementary material
Supplementary Information accompanies this paper at 10.1038/s41467-017-02307-4.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Ekert A, Renner R. The ultimate physical limits of privacy. Nature. 2014;507:443–447. doi: 10.1038/nature13132. [DOI] [PubMed] [Google Scholar]
- 2.Fung CHF, Qi B, Tamaki K, Lo HK. Phase-remapping attack in practical quantum-key-distribution systems. Phys. Rev. A. 2007;75:032314. doi: 10.1103/PhysRevA.75.032314. [DOI] [Google Scholar]
- 3.Lydersen L, et al. Hacking commercial quantum cryptography systems by tailored bright illumination. Nat. Photonics. 2010;4:686–689. doi: 10.1038/nphoton.2010.214. [DOI] [Google Scholar]
- 4.Weier H, et al. Quantum eavesdropping without interception: an attack exploiting the dead time of single-photon detectors. New J. Phys. 2011;13:073024. doi: 10.1088/1367-2630/13/7/073024. [DOI] [Google Scholar]
- 5.Gerhardt I, et al. Full-field implementation of a perfect eavesdropper on a quantum cryptography system. Nat. Commun. 2011;2:349. doi: 10.1038/ncomms1348. [DOI] [PubMed] [Google Scholar]
- 6.Ekert AK. Quantum cryptography based on Bell’s theorem. Phys. Rev. Lett. 1991;67:661. doi: 10.1103/PhysRevLett.67.661. [DOI] [PubMed] [Google Scholar]
- 7.Mayers, D. & Yao, A. Quantum cryptography with imperfect apparatus. In Proc. 39th Annual Symposium on Foundations of Computer Science, 1998, 503–509 (IEEE, 1998).
- 8.Barrett J, Hardy L, Kent A. No signaling and quantum key distribution. Phys. Rev. Lett. 2005;95:010503. doi: 10.1103/PhysRevLett.95.010503. [DOI] [PubMed] [Google Scholar]
- 9.Acín A, Masanes L. Certified randomness in quantum physics. Nature. 2016;540:213–219. doi: 10.1038/nature20119. [DOI] [PubMed] [Google Scholar]
- 10.Bell JS. On the Einstein-Podolsky-Rosen paradox. Physics. 1964;1:195–200. [Google Scholar]
- 11.Brunner N, Cavalcanti D, Pironio S, Scarani V, Wehner S. Bell nonlocality. Rev. Mod. Phys. 2014;86:419. doi: 10.1103/RevModPhys.86.419. [DOI] [Google Scholar]
- 12.Hensen B, et al. Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. Nature. 2015;526:682–686. doi: 10.1038/nature15759. [DOI] [PubMed] [Google Scholar]
- 13.Shalm LK, et al. Strong loophole-free test of local realism. Phys. Rev. Lett. 2015;115:250402. doi: 10.1103/PhysRevLett.115.250402. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Giustina M, et al. Significant-loophole-free test of Bell’s theorem with entangled photons. Phys. Rev. Lett. 2015;115:250401. doi: 10.1103/PhysRevLett.115.250401. [DOI] [PubMed] [Google Scholar]
- 15.Pironio S, et al. Random numbers certified by Bell’s theorem. Nature. 2010;464:1021–1024. doi: 10.1038/nature09008. [DOI] [PubMed] [Google Scholar]
- 16.Acín A, Massar S, Pironio S. Randomness versus nonlocality and entanglement. Phys. Rev. Lett. 2012;108:100402. doi: 10.1103/PhysRevLett.108.100402. [DOI] [PubMed] [Google Scholar]
- 17.Acín A, et al. Device-independent security of quantum cryptography against collective attacks. Phys. Rev. Lett. 2007;98:230501. doi: 10.1103/PhysRevLett.98.230501. [DOI] [PubMed] [Google Scholar]
- 18.Masanes L. Universally composable privacy amplification from causality constraints. Phys. Rev. Lett. 2009;102:140501. doi: 10.1103/PhysRevLett.102.140501. [DOI] [PubMed] [Google Scholar]
- 19.Pironio S, et al. Device-independent quantum key distribution secure against collective attacks. New J. Phys. 2009;11:045021. doi: 10.1088/1367-2630/11/4/045021. [DOI] [Google Scholar]
- 20.Hänggi, E., Renner, R. & Wolf, S. Efficient device-independent quantum key distribution. In Advances in Cryptology–EUROCRYPT 2010, 216–234 (Springer, 2010).
- 21.Masanes L, Renner R, Christandl M, Winter A, Barrett J. Full security of quantum key distribution from no-signaling constraints. IEEE Trans. Inf. Theory. 2014;60:4973–4986. doi: 10.1109/TIT.2014.2329417. [DOI] [Google Scholar]
- 22.Bennett, C. H. & Brassard, G. Quantum cryptography: public key distribution and coin tossing. In Proc. IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, India, 175–179 (IEEE, NY, 1984).
- 23.Renner R. Symmetry of large physical systems implies independence of subsystems. Nat. Phys. 2007;3:645–649. doi: 10.1038/nphys684. [DOI] [Google Scholar]
- 24.Christandl M, König R, Renner R. Postselection technique for quantum channels with applications to quantum cryptography. Phys. Rev. Lett. 2009;102:020504. doi: 10.1103/PhysRevLett.102.020504. [DOI] [PubMed] [Google Scholar]
- 25.Christandl M, Toner B. Finite de Finetti theorem for conditional probability distributions describing physical theories. J. Math. Phys. 2009;50:042104. doi: 10.1063/1.3114986. [DOI] [Google Scholar]
- 26.Arnon-Friedman R, Renner R. de Finetti reductions for correlations. J. Math. Phys. 2015;56:052203. doi: 10.1063/1.4921341. [DOI] [Google Scholar]
- 27.Reichardt BW, Unger F, Vazirani U. Classical command of quantum systems. Nature. 2013;496:456–460. doi: 10.1038/nature12035. [DOI] [PubMed] [Google Scholar]
- 28.Vazirani U, Vidick T. Fully device-independent quantum key distribution. Phys. Rev. Lett. 2014;113:140501. doi: 10.1103/PhysRevLett.113.140501. [DOI] [PubMed] [Google Scholar]
- 29.Miller, C. A. & Shi, Y. Robust protocols for securely expanding randomness and distributing keys using untrusted quantum devices. In Proc. 46th Annual ACM Symposium on Theory of Computing, 417–426 (ACM, 2014).
- 30.Tomamichel M, Colbeck R, Renner R. Duality between smooth min- and max-entropies. IEEE Trans. Inf. Theory. 2010;56:4674–4681. doi: 10.1109/TIT.2010.2054130. [DOI] [Google Scholar]
- 31.Tomamichel M, Colbeck R, Renner R. A fully quantum asymptotic equipartition property. IEEE Trans. Inf. Theory. 2009;55:5840–5847. doi: 10.1109/TIT.2009.2032797. [DOI] [Google Scholar]
- 32.Berta M, Christandl M, Colbeck R, Renes JM, Renner R. The uncertainty principle in the presence of quantum memory. Nat. Phys. 2010;6:659–662. doi: 10.1038/nphys1734. [DOI] [Google Scholar]
- 33.Garcia-Patron R, Cerf NJ. Unconditional optimality of gaussian attacks against continuous-variable quantum key distribution. Phys. Rev. Lett. 2006;97:190503. doi: 10.1103/PhysRevLett.97.190503. [DOI] [PubMed] [Google Scholar]
- 34.Clauser JF, Horne MA, Shimony A, Holt RA. Proposed experiment to test local hidden-variable theories. Phys. Rev. Lett. 1969;23:880. doi: 10.1103/PhysRevLett.23.880. [DOI] [Google Scholar]
- 35.Scarani V, Renner R. Quantum cryptography with finite resources: unconditional security bound for discrete-variable protocols with one-way postprocessing. Phys. Rev. Lett. 2008;100:200501. doi: 10.1103/PhysRevLett.100.200501. [DOI] [PubMed] [Google Scholar]
- 36.Wilde MM, Winter A, Yang D. Strong converse for the classical capacity of entanglement-breaking and Hadamard channels via a sandwiched Rényi relative entropy. Commun. Math. Phys. 2014;331:593–622. doi: 10.1007/s00220-014-2122-x. [DOI] [Google Scholar]
- 37.Müller-Lennert M, Dupuis F, Szehr O, Fehr S, Tomamichel M. On quantum Rényi entropies: a new generalization and some properties. J. Math. Phys. 2013;54:122203. doi: 10.1063/1.4838856. [DOI] [Google Scholar]
- 38.Tomamichel, M. A framework for non-asymptotic quantum information theory. Preprint at https://arxiv.org/abs/1203.2142 (2012).
- 39.Müller-Lennert, M. Quantum relative Rényi entropies. Master’s thesis (ETH Zürich, 2013).
- 40.Lieb, E. H. & Thirring, W. Inequalities for the moments of the eigenvalues of the Schrödinger equation and their relations to Sobolev inequalities. In Studies in Mathematical Physics: Essays in Honor of Valentine Bargmann, 269–303 (Princeton University Press, 1976).
- 41.Brassard, G. & Salvail, L. Secret-key reconciliation by public discussion. In Advances in Cryptology EUROCRYPT 93, 410–423 (Springer, 1993).
- 42.Renner, R. & Wolf, S. Simple and tight bounds for information reconciliation and privacy amplification. In Advances in cryptology-ASIACRYPT 2005, 199–216 (Springer, 2005).
- 43.Renner, R. & König, R. Universally composable privacy amplification against quantum adversaries. In Theory of Cryptography, 407–425 (Springer, 2005).
- 44.De A, Portmann C, Vidick T, Renner R. Trevisan’s extractor in the presence of quantum side information. SIAM J. Comput. 2012;41:915–940. doi: 10.1137/100813683. [DOI] [Google Scholar]
- 45.Portmann, C. & Renner, R. Cryptographic security of quantum key distribution. Preprint at https://arxiv.org/abs/1409.3525 (2014).
- 46.Tomamichel, M. Quantum Information Processing with Finite Resources: Mathematical Foundations, Vol. 5 (Springer, 2015).