Soundness
Of a proof system. A proof system is sound if, for every FALSE statement, the Prover can (almost) never produce an accepting proof.
Soundness is a property of a proof system best understood as: “a malicious Prover should not be able to convince an honest Verifier of an invalid statement”.
The math version.
More formally, given a relation $\mathcal{R}$ and the associated language $\mathcal{L}_{\mathcal{R}}$, the mathematical expression for soundness often looks like the equation below:

$$\forall x \notin \mathcal{L}_{\mathcal{R}}: \quad \Pr\big[\langle \mathcal{P}^*, \mathcal{V}\rangle(x) = 1\big] \leq \epsilon,$$

where $\mathcal{P}^*$ is any malicious prover, $\mathcal{V}$ is the honest Verifier, $\langle \mathcal{P}^*, \mathcal{V}\rangle(x)$ denotes the bit output by $\mathcal{V}$ at the end of the interaction with $\mathcal{P}^*$ for the instance $x$, and $\epsilon$ is a small number. We call $\epsilon$ the soundness error.
Statistical vs Computational Soundness. The notion of soundness described above is known as statistical soundness or information-theoretic soundness. It considers all possible adversaries, including those with unlimited resources. In most real-world applications, we are only concerned with bounded adversaries: we usually limit ourselves to adversaries running probabilistic polynomial-time algorithms. This adversarial model is formalised by the notion of an argument (rather than a “proof”) and that of computational soundness. Computational soundness is defined in the same way as information-theoretic soundness but is only required to hold against probabilistic polynomial-time adversaries.
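As a sketch of the resulting definition (the security parameter $\lambda$ and the convention that the soundness error be negligible in $\lambda$ are standard assumptions added here, not spelled out above), computational soundness reads:

$$\forall \text{ PPT } \mathcal{P}^*, \ \forall x \notin \mathcal{L}_{\mathcal{R}}: \quad \Pr\big[\langle \mathcal{P}^*, \mathcal{V}\rangle(x) = 1\big] \leq \epsilon(\lambda),$$

where $\epsilon(\lambda)$ is negligible. The only change from the statistical definition is that the quantifier over provers is restricted to probabilistic polynomial-time algorithms.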
Stronger Soundness variations
Soundness comes in several levels of strength. The version above is the most basic: an honest verifier should only accept a proof if it corresponds to a valid statement. The following describes increasingly stronger versions of soundness.
Knowledge Soundness
When constructing arguments of knowledge, a notion stronger than simple soundness is required: knowledge soundness. In this strengthened scenario, an honest verifier should only accept a proof if it corresponds to a valid statement and it was generated by a prover who knows a valid witness for this statement.
Formally, this is defined in terms of an extractor: an argument system is said to be knowledge sound if there exists an efficient algorithm that, given the prover’s algorithm and internal state, can extract a valid witness for the proven statement. Intuitively, this models the desired property that the prover “knows” a witness for this statement.
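As a sketch (the quantifier order, the extractor’s access to the prover, e.g. black-box rewinding versus white-box access, and the notation $\mathcal{E}^{\mathcal{P}^*}$ and $\kappa$ are choices made here and vary across formalisations), knowledge soundness can be written as: there exists an efficient extractor $\mathcal{E}$ such that, for every prover $\mathcal{P}^*$ and every instance $x$,

$$\Pr\big[(x, w) \in \mathcal{R} \;:\; w \leftarrow \mathcal{E}^{\mathcal{P}^*}(x)\big] \;\geq\; \Pr\big[\langle \mathcal{P}^*, \mathcal{V}\rangle(x) = 1\big] - \kappa,$$

where $\kappa$, called the knowledge error, plays the role of $\epsilon$ above. In words: any prover that convinces $\mathcal{V}$ with probability noticeably larger than $\kappa$ must “know” a witness, in the sense that $\mathcal{E}$ can extract one from it.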
Simulation Knowledge Soundness (or Simulation-Extractability)
Another useful and even stronger notion of soundness is simulation-extractability. Informally, it states that an honest verifier should only accept a proof if it corresponds to a valid statement and was generated by a prover who knows a valid witness, even if the prover is allowed to observe honestly generated proofs for arbitrary statements.
Intuitively, if an argument system is simulation-extractable, then it is non-malleable: an adversary can’t observe a proof for some statement generated by a third party and maul it into a proof for another statement for which the adversary does not know a valid witness.
This captures many real-world scenarios where honestly generated proofs are made public and may be seen by potential attackers. As such, simulation-extractability has become the “gold standard” among security notions for argument systems.
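As a rough sketch for the non-interactive setting (the simulation oracle $\mathcal{S}$, the query set $Q$ and the exact winning condition below are illustrative choices; precise formulations differ): the adversary $\mathcal{A}$ may query $\mathcal{S}$ for simulated proofs of arbitrary statements and eventually outputs a pair $(x^*, \pi^*)$ that was not returned by the oracle; the scheme is simulation-extractable if there exists an efficient extractor $\mathcal{E}$ such that

$$\Pr\big[\mathcal{V}(x^*, \pi^*) = 1 \;\wedge\; (x^*, \pi^*) \notin Q \;\wedge\; (x^*, w^*) \notin \mathcal{R} \;:\; w^* \leftarrow \mathcal{E}(\mathcal{A}, x^*, \pi^*)\big] \leq \epsilon.$$

In words: even after seeing simulated proofs, the adversary cannot produce a fresh accepting proof from which a valid witness cannot be extracted.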