There are documented instances, and many more suspected instances, of standards being manipulated by attackers. This raises the question of how users of standard curves can be assured that the curves were not generated to be weak.
SafeCurves requires curve shapes for which the ECC security story is as simple as possible, for example by requiring prime fields. This still leaves various security dangers such as incompleteness and transfers, but SafeCurves checks for these dangers in a publicly verifiable way. There is still a potential lack of assurance in the following corner case: the attacker knows a secret attack that applies to some rare class of curves, and has manipulated the curve choice so that the standard curve falls into that class.
SafeCurves requires rigidity to protect ECC users against this possibility. Rigidity is a feature of a curve-generation process, limiting the number of curves that can be generated by the process. The attacker succeeds only if some curve in this limited set is vulnerable to the secret attack. For comparison, without rigidity, the attacker can freely generate curves until finding a curve vulnerable to the secret attack.
SafeCurves classifies existing curve-generation processes into four levels of protection: trivially manipulatable, manipulatable, somewhat rigid, and fully rigid.
The following table reports the protection provided by various existing curves:
Isn't it safest to choose cryptographic parameters at random?
Cryptographic keys lose security when they do not have enough randomness. There is a common confusion between public parameters and keys, creating a common myth that public parameters likewise lose security unless they are as random as possible.
The literature contains many counterexamples to this myth. For example, there are known attacks that significantly reduce the security level of random genus-3 curves, but the attacks do not apply to specially structured genus-3 curves, namely hyperelliptic curves. As another example, in elliptic-curve cryptography one takes only unusual curves whose group orders have very large prime divisors, because uniform random curves are much less secure than these unusual curves. See 2011 Koblitz–Koblitz–Menezes (Section 11) for more subtle examples.
One should not conclude that uniform random parameters are necessarily bad: there are also examples where adding randomness to parameters is good. To see whether randomness is good or bad for the parameters of any particular system, one needs to study the details of attacks against that system.
All curves that meet the SafeCurves criteria are solidly protected against all published attacks. The criteria are computer-verified, with full details presented on this site to support third-party verification. It is conceivable that some of these curves are vulnerable to an attack that is not publicly known, but there is no basis for guessing whether any particular curve will be more or less vulnerable to attack than a random curve.
ECC users can reasonably choose their own random curves to protect against multiple-target rho attacks. However, giving a random curve to each user also has several obvious costs, and for lower costs one can take steps that have larger security benefits. This is why essentially all ECC applications use shared curves.
What do the manipulatable standards say about this?
The possibility of attackers manipulating standard curve choices was raised in the late 1990s, when NSA volunteered to "contribute" elliptic curves to the committee producing ANSI X9.62. NSA did in fact end up producing various elliptic curves later standardized by ANSI X9.62, SEC 2, and NIST FIPS 186-2; these curves were subsequently deployed in many applications.
In response to NSA's contributions, ANSI X9.62 developed "a method for selecting an elliptic curve verifiably at random", and a procedure to "verify that a given elliptic curve was indeed generated at random"; it even claims that this procedure "serves as proof (under the assumption that SHA-1 cannot be inverted) that the parameters were indeed generated at random". However, this procedure does not verify randomness; it verifies only that the curve coefficients were produced as SHA-1 output. The claimed "proof" is nonexistent. The ANSI X9.62 curve-generation method is not trivially manipulatable but it is manipulatable.
IEEE P1363 copied the same curve-generation method and stated that it allows "others to verify that the curve was indeed chosen pseudo-randomly". However, "pseudo-random" is not the same as "random", and does nothing to stop a malicious curve generator from searching through many choices of seeds. NIST correctly characterized the verification procedure for these curves as merely checking "that the coefficient b was obtained from s via the cryptographic hash function SHA-1".
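NIST's characterization can be made concrete with a sketch. The real X9.62 procedure expands the seed through several SHA-1 computations and checks a relation involving the curve coefficient; here a single hash call stands in for that derivation, which is enough to show what is (and is not) being verified. All names are illustrative, not from any standard:

```python
import hashlib

def coefficient_from_seed(seed: bytes) -> int:
    # Hypothetical stand-in for the X9.62 seed-to-coefficient derivation.
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

def verify(seed: bytes, b: int) -> bool:
    # This is all the "verification" establishes: b is the SHA-1 output
    # for this seed.  It says nothing about how the seed was chosen.
    return coefficient_from_seed(seed) == b

seed = b"arbitrary seed chosen by the curve generator"
b = coefficient_from_seed(seed)
assert verify(seed, b)
```

The check passes for any seed whatsoever, which is exactly why it does not constrain a malicious generator.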
SEC 2 version 1.0 copied the curves that NSA had produced for NIST, and copied the incorrect ANSI X9.62 claim that the curves were "chosen verifiably at random". SEC 2 further claimed that the curves were chosen "by repeatedly selecting a random seed and counting the number of points on the corresponding curve until appropriate parameters were found". This claim might be correct but is certainly not verifiable.
What do other sources say about this?
Shortly after the NIST curves were announced, 1999 Scott pointed out that the curves were not in fact verifiably random:
Now if the idea is to increase our confidence that these curves are therefore completely randomly selected from the vast number of possible elliptic curves and hence likely to be secure, I think this process fails. The underlying assumption is that the vast majority of curves are "good". Consider now the possibility that one in a million of all curves have an exploitable structure that "they" know about, but we don't.. Then "they" simply generate a million random seeds until they find one that generates one of "their" curves. Then they get us to use them. And remember the standard paranoia assumptions apply - "they" have computing power way beyond what we can muster. So maybe that could be 1 billion.
Scott recommended a rigid curve-generation method as an alternative, and concluded his posting as follows: "So, sigh, why didn't they do it that way? Do they want to be distrusted?"
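Scott's scenario is easy to sketch. The predicate `is_secretly_weak` below is a hypothetical stand-in for the attacker's private knowledge, and the 1-in-2^16 weakness rate is chosen only to keep the toy search fast:

```python
import hashlib
import itertools

def coefficient_from_seed(seed: bytes) -> int:
    # Stand-in for a hash-based seed-to-coefficient derivation.
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

def is_secretly_weak(b: int) -> bool:
    # Toy stand-in for the attacker's secret: pretend roughly
    # 1 in 2**16 curves has an exploitable structure.
    return b % (2**16) == 0

# The malicious generator simply retries seeds until it hits the weak set.
for counter in itertools.count():
    seed = counter.to_bytes(8, "big")
    b = coefficient_from_seed(seed)
    if is_secretly_weak(b):
        break

# The published (seed, b) pair passes the "verifiably random" check,
# yet b was drawn entirely from the attacker's weak set.
```

Rigidity blocks this search by leaving the generator too few seeds (or none) to retry.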
In 2005, Brainpool identified the lack of explanation of the NSA/NIST curve seeds as a "major issue" (p.2). Brainpool required a rigid curve-generation method, as noted above, with seeds "generated in a systematic and comprehensive way" rather than being generated randomly. At one point Brainpool incorrectly described its curves as "random". At several points Brainpool described its requirement as a requirement to be "verifiably pseudo-random", but this understates what Brainpool actually requires and seems likely to cause confusion.
In September 2013, after revelations of NSA sabotage of cryptographic standards, Bruce Schneier wrote: "Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can. ... I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry."
What about rigid choices of subgroups?
For each curve considered by SafeCurves, the order ℓ of the specified subgroup of the group of rational points is prime and larger than sqrt(p)+1. A curve cannot have two different subgroups meeting this requirement.
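The uniqueness claim follows from the Hasse bound: two distinct prime orders ℓ1, ℓ2 > sqrt(p)+1 would force the group order to be at least ℓ1·ℓ2 > (sqrt(p)+1)^2 = p + 1 + 2·sqrt(p), which exceeds the largest possible number of points. A quick integer check, using Curve25519's published parameters purely as an illustration:

```python
from math import isqrt

p = 2**255 - 19                                       # Curve25519 field prime
l = 2**252 + 27742317777372353535851937790883648493   # prime subgroup order
hasse_max = p + 1 + 2 * isqrt(p)                      # largest possible group order

assert (l - 1)**2 > p     # i.e. l > sqrt(p) + 1, the SafeCurves requirement
assert l * l > hasse_max  # so a second subgroup of such order cannot fit
```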
What about rigid choices of base points?
For each curve considered by SafeCurves, the specified base point is a generator of the specified subgroup. SafeCurves does not place restrictions on the choice of this base point. If there is a "weak" base point W allowing easy computations of discrete logarithms, then ECDLP is weak for every base point: an attacker can compute log_P Q as the ratio of log_W Q and log_W P modulo ℓ. Typical ECC protocols, such as signatures, are designed to be secure for all choices of base point.
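The ratio argument can be illustrated in a toy multiplicative group, where brute-force discrete logs are feasible: if logs to one "weak" base W were easy, the log of Q to any other base P in the same prime-order subgroup would follow as log_W(Q) / log_W(P) mod ℓ. The modulus 1019 and the exponents below are arbitrary small choices:

```python
q = 1019   # prime; the group Z_q* has order 1018 = 2 * 509
l = 509    # prime order of the subgroup generated by W
W = 4      # 2 is a primitive root mod 1019, so its square has order 509

def dlog(base, target):
    # Brute-force discrete log in the subgroup (feasible only at toy size).
    x = 1
    for k in range(l):
        if x == target:
            return k
        x = x * base % q
    raise ValueError("target not in subgroup")

P = pow(W, 123, q)   # some other base point in the subgroup
Q = pow(P, 77, q)    # challenge: log_P(Q) = 77 by construction

# Recover log_P(Q) using only discrete logs to the "weak" base W.
ratio = dlog(W, Q) * pow(dlog(W, P), -1, l) % l
assert ratio == 77
```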
There are some protocols where base-point rigidity is important. For example, a "random" ECDLP challenge, computing the discrete logarithm of Q base P, could have a back door for the challenge creator. Certicom's ECDLP challenges use rigid generators P and Q of the subgroup to prevent Certicom from choosing the discrete logarithm in advance.
For some curves the specified base point is chosen rigidly. The usual choice is the generator with smallest possible x-coordinate for short Weierstrass curves or Montgomery curves, or smallest possible y-coordinate for Edwards curves. The reason for x vs. y here is that y(-P)=y(P) for Edwards, allowing y as a ladder coordinate, while x(-P)=x(P) for the others, allowing x as a ladder coordinate.
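This convention can be sketched on a toy short Weierstrass curve: among all points whose order is the large prime ℓ, take the one with smallest x-coordinate. The curve y^2 = x^3 + 2x + 3 over F_199 is an arbitrary small example, not a standardized curve:

```python
p, a, b = 199, 2, 3

def add(P, Q):
    # Affine short-Weierstrass addition; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Enumerate the whole group (feasible only at toy size) to get its order.
pts = [None] + [(x, y) for x in range(p) for y in range(p)
                if (y * y - (x**3 + a * x + b)) % p == 0]
N = len(pts)

def largest_prime_factor(n):
    # Plays the role of the specified subgroup order l.
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return n if n > 1 else f

l = largest_prime_factor(N)

# Rigid choice: smallest x among points of order l.  Since l is prime,
# "P is not the identity and mul(l, P) is the identity" pins the order down.
G = min((P for P in pts[1:] if mul(l, P) is None), key=lambda P: P[0])
```

For an Edwards curve the same scan would run over y-coordinates instead, for the ladder-coordinate reason given above.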
Brainpool multiplies this smallest point by a mostly rigid hash; Brainpool states that a small point "could possibly" allow side-channel attacks. However, there is no indication that this adds any protection against serious side-channel attacks, such as template attacks. Serious defenses, such as secret sharing, work for any choice of base point.
Version: This is version 2013.10.25 of the rigid.html web page.