Extracting randomness from bit streams remains an active research area in modern cryptography. Recently, there has been renewed interest in debiasing with structured linear mappings, namely linear codes. Linear codes were proposed as extractors for independent but biased bits in the theory community . Decades later, they were popularized among hardware designers , and the bounds were refined for precise entropy assessment . Recent work obtained new non-oblivious min-entropy bounds for heterogeneous independent bits. This note shows how to establish condensing properties using Fourier analysis.
Fourier Analysis Basics
For a good introduction to state-of-the-art Fourier techniques for Boolean functions, see . Recall that every \(f : \{0,1\}^n \to \mathbb{R}\) has a (Fourier) expansion as a combination of the parity functions \(\chi_S(x) = (-1)^{\sum_{i\in S} x_i}\):
$$
f(x) = \sum_{S \subseteq [n]} \widehat{f}(S) \chi_S(x),
$$ with coefficients given by
$$
\widehat{f}(S) = 2^{-n}\sum_x f(x) \chi_S(x).
$$
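As a quick sanity check, here is a minimal Python sketch (the function \(f\) and the dimension \(n\) are arbitrary illustrative choices) that computes the coefficients \(\widehat{f}(S)\) by brute force and verifies the expansion above, as well as the Parseval identity stated in the next display.

```python
# A minimal sketch: brute-force Fourier coefficients of f : {0,1}^n -> R over the
# parity basis chi_S(x) = (-1)^{sum_{i in S} x_i}. The function f and n are
# arbitrary illustrative choices.
from itertools import product

n = 3
f = lambda x: sum(x) % 2 + 0.5 * x[0]          # any real-valued function on {0,1}^n

def chi(S, x):
    """Parity character chi_S(x) = (-1)^{sum of x_i over i in S}."""
    return (-1) ** sum(x[i] for i in S)

points = list(product([0, 1], repeat=n))
subsets = [tuple(i for i in range(n) if (mask >> i) & 1) for mask in range(2 ** n)]

# Fourier coefficients: hat_f(S) = 2^{-n} * sum_x f(x) chi_S(x)
hat_f = {S: sum(f(x) * chi(S, x) for x in points) / 2 ** n for S in subsets}

# Check the expansion f(x) = sum_S hat_f(S) chi_S(x) at every point
assert all(abs(f(x) - sum(hat_f[S] * chi(S, x) for S in subsets)) < 1e-12 for x in points)

# Parseval: 2^{-n} sum_x f(x)^2 = sum_S hat_f(S)^2
lhs = sum(f(x) ** 2 for x in points) / 2 ** n
rhs = sum(c ** 2 for c in hat_f.values())
assert abs(lhs - rhs) < 1e-12
```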
Next comes Parseval's theorem, which expresses the isometric property of the Fourier transform:
\begin{equation}
2^{-n} \sum_{x\in\{0,1\}^n} f(x)^2 = \sum_{S \subseteq [n]} \widehat{f}(S)^2.
\end{equation}
Fourier analysis is often applied to probability distributions \(\mu\) on bit strings. In this case, Parseval's theorem can be restated nicely using the notion of bit bias (here and below, norms of distributions are taken with respect to the counting measure):
$$
\left\| \mu - 2^{-n}\right\|_2 = \sqrt{2^{n} \sum_{\emptyset\not=S \subseteq [n]} \widehat{\mu}(S)^2}
=\sqrt{2^{-n} \sum_{\emptyset\not=S \subseteq [n]} \operatorname{bias}\left( \oplus_{i\in S} X_i \right)^2 },
$$ and in this modern form is known as (not surprisingly) the XOR lemma . Recall that the bit bias is \(\mathrm{bias}(X) = \mathbf{P}\{X=1\}-\mathbf{P}\{X=0\} \).
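Here is a minimal numerical check of this identity for a small product distribution of independent bits (the biases are illustrative choices):

```python
# A minimal sketch checking the XOR lemma for a small product distribution of
# independent bits; the per-bit probabilities are illustrative values.
from itertools import product
from math import sqrt, prod

n = 3
p_one = [0.6, 0.7, 0.55]                      # P{X_i = 1}, arbitrary illustrative values

points = list(product([0, 1], repeat=n))
mu = {x: prod(p_one[i] if x[i] else 1 - p_one[i] for i in range(n)) for x in points}

# Left-hand side: ||mu - uniform||_2 with the counting-measure norm
lhs = sqrt(sum((mu[x] - 2 ** -n) ** 2 for x in points))

# Right-hand side: sqrt(2^{-n} * sum over nonempty S of bias(XOR_{i in S} X_i)^2),
# where bias(X) = P{X=1} - P{X=0}
def bias(S):
    p1 = sum(mu[x] for x in points if sum(x[i] for i in S) % 2 == 1)
    return p1 - (1 - p1)

subsets = [tuple(i for i in range(n) if (mask >> i) & 1) for mask in range(1, 2 ** n)]
rhs = sqrt(2 ** -n * sum(bias(S) ** 2 for S in subsets))

assert abs(lhs - rhs) < 1e-12
```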
Linear Codes
A linear code is a subspace of \(\mathbb{F}_2^n\), generated by the rows of a matrix \(G \in \{0,1\}^{m\times n}\). We apply the code matrix to a source \(X_1,\ldots,X_n\), obtaining
$$
\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_m \end{bmatrix} = G \cdot \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}.
$$ To generate useful output, the code needs more structure, for instance relatively large Hamming weights. This means that every nonempty subset of rows of the matrix should sum (in \(\mathbb{F}_2\), that is modulo 2) to a vector of not-too-small Hamming weight, as this ensures small biases in the XOR lemma.
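To make this concrete, the sketch below applies a \(4\times 7\) generator matrix (a systematic form of the [7,4] Hamming code, chosen purely for illustration) to a source word over \(\mathbb{F}_2\), and then lists the Hamming weights of all codewords in its row span.

```python
# A minimal sketch, assuming a 4x7 generator matrix in systematic form
# (a standard generator of the [7,4] Hamming code, used purely for illustration).
from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
m, n = len(G), len(G[0])   # m = 4 output bits, n = 7 source bits

def apply_code(x):
    """Y_j = <G_j, x> mod 2: each output bit is the XOR of the source bits selected by row j."""
    return tuple(sum(g * xi for g, xi in zip(row, x)) % 2 for row in G)

print(apply_code((1, 0, 1, 1, 0, 0, 1)))   # example source word -> (1, 1, 0, 0)

# Weight distribution of the row span: W[k] = number of codewords of Hamming weight k
W = {}
for coeffs in product([0, 1], repeat=m):
    u = tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2 for j in range(n))
    W[sum(u)] = W.get(sum(u), 0) + 1
print(dict(sorted(W.items())))   # {0: 1, 3: 7, 4: 7, 7: 1}: every nonzero codeword has weight >= 3
```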
Condensing and Extraction Properties
Say that the \(X_i\) are IID with bias \(\delta\). Then we can express the collision entropy of the output in terms of the code's Hamming weight distribution, using Fourier analysis. Indeed, applying the XOR lemma:
$$
\|\mathbf{P}_Y\|_2^2 - 2^{-m} = \|\mathbf{P}_Y - 2^{-m}\|_2^2 = 2^{-m}\sum_{\mathbf{0}\not=u \in \operatorname{rowspan}(G) } \operatorname{bias}( \oplus_i u_i X_i)^2 = 2^{-m} \sum_{k \geq 1} \mathbf{W}(k) \delta^{2k } ,
$$ where \(\mathbf{W}(k)\) is the number of codewords of Hamming weight \(k\) (recall that the squared second norm of \(\mathbf{P}_Y\) is the collision probability, which determines the collision entropy).
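The formula is easy to verify by brute force on a small example. The sketch below reuses the [7,4] Hamming generator matrix from the previous snippet, with its weight distribution hardcoded from that output, and an illustrative bias \(\delta = 0.2\).

```python
# A minimal sketch verifying the collision formula by brute force; the [7,4] Hamming
# generator matrix, its weight distribution (from the previous snippet), and the
# bias delta = 0.2 are all illustrative choices.
from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
m, n = len(G), len(G[0])
W = {3: 7, 4: 7, 7: 1}                       # nonzero part of the weight distribution
delta = 0.2
p1, p0 = (1 + delta) / 2, (1 - delta) / 2    # so that bias(X_i) = p1 - p0 = delta

# Exact output distribution P_Y, by enumerating all 2^n source words
P_Y = {}
for x in product([0, 1], repeat=n):
    px = 1.0
    for xi in x:
        px *= p1 if xi else p0
    y = tuple(sum(g * xi for g, xi in zip(row, x)) % 2 for row in G)
    P_Y[y] = P_Y.get(y, 0.0) + px

lhs = sum(p ** 2 for p in P_Y.values()) - 2 ** -m                  # ||P_Y||_2^2 - 2^{-m}
rhs = 2 ** -m * sum(Wk * delta ** (2 * k) for k, Wk in W.items())  # 2^{-m} sum_k W(k) delta^{2k}
assert abs(lhs - rhs) < 1e-12
```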
Using standard norm inequalities (spelled out after the display), we obtain the following useful bounds:
\begin{align}
\begin{aligned}
\|\mathbf{P}_Y\|_{\infty} - 2^{-m} & \leq \left( 2^{-m} \sum_{k \geq 1} \mathbf{W}(k) \delta^{2k }\right)^{1/2} \\
\|\mathbf{P}_Y - 2^{-m}\|_{1} & \leq \left( \sum_{k \geq 1} \mathbf{W}(k) \delta^{2k } \right)^{1/2},
\end{aligned}
\end{align}
for the min-entropy and the smooth min-entropy of the output \(Y\), respectively.
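For the record, the two standard inequalities at work here are \(\|v\|_\infty \leq \|v\|_2\) and the Cauchy-Schwarz bound \(\|v\|_1 \leq 2^{m/2}\|v\|_2\), applied to \(v = \mathbf{P}_Y - 2^{-m}\) over the \(2^m\)-point output space:
$$
\max_{y} \mathbf{P}\{Y=y\} - 2^{-m} \leq \|\mathbf{P}_Y - 2^{-m}\|_{\infty} \leq \|\mathbf{P}_Y - 2^{-m}\|_{2},
\qquad
\|\mathbf{P}_Y - 2^{-m}\|_{1} \leq 2^{m/2}\, \|\mathbf{P}_Y - 2^{-m}\|_{2}.
$$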
When the code has distance \(d\), i.e. \(\mathbf{W}(k) = 0\) for \(0 < k < d\), then
$$
\begin{aligned}
\max_{y} \mathbf{P}\{Y=y\} & \leq 2^{-m} + \delta^{d} \\
\|\mathbf{P}_Y - 2^{-m}\|_{1} & \leq 2^{m/2}\delta^d .
\end{aligned}
$$
The first bound is due to . The second bound is stronger than Corollary 3 in . The general formula for the second norm appears to be somewhat novel. It can be used to improve upon the results of , moving from high unpredictability to near uniformity, as high min-entropy alone is not sufficient for all cryptographic applications.
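As an illustrative numerical example (the code and the bias are our choices): take the [7,4,3] Hamming code, so \(m=4\) and \(d=3\), and suppose the seven source bits are IID with bias \(\delta = 0.1\). Then
$$
\max_{y} \mathbf{P}\{Y=y\} \leq 2^{-4} + 0.1^{3} = 0.0625 + 0.001,
\qquad
\|\mathbf{P}_Y - 2^{-4}\|_{1} \leq 2^{2}\cdot 0.1^{3} = 0.004,
$$ so the four output bits are, in particular, within statistical distance \(0.002\) of uniform.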
While the derived formula is non-oblivious, computing the weight distribution of a code is a hard problem in general. It would be interesting to propose approximation schemes for this purpose.