While implementing the LargeIntegersPlugin I have relied on the assumption that all LargeIntegers are correctly normalized, as stated in the LargePositiveInteger class comment:

---
I represent positive integers of more than 30 bits (ie, >= 1073741824). These values are beyond the range of SmallInteger, and are encoded here as an array of 8-bit digits. Care must be taken, when new values are computed, that any result that COULD BE a SmallInteger IS a SmallInteger (see normalize).

Note that the bit manipulation primitives, bitAnd:, bitShift:, etc., = and ~= run without failure (and therefore fast) if the value fits in 32 bits. This is a great help to the simulator.
---
So comparisons of *not normalized* LargeIntegers with SmallIntegers can lead to wrong results (see also below): I have exploited the fact that a *normalized* LargeInteger and a SmallInteger can never be #= to each other!
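To make the failure mode concrete, here is a small workspace sketch (my own illustration, not code from the image or the DSA package; it assumes only the usual LargePositiveInteger protocol #new:, #digitAt:put: and #normalize):

	| z |
	z _ LargePositiveInteger new: 1.	"one digit byte, value 0"
	z digitAt: 1 put: 1.			"value is now 1, but z is *not* normalized"
	z = 1.					"may answer false under the normalization assumption"
	z normalize = 1.			"true: #normalize reduces z to the SmallInteger 1"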
If we dropped the normalization assumption, many methods could be wrong, regardless of whether they live in the LargeIntegersPlugin or in Smalltalk.
So I think the DSA implementation has to be fixed to ensure the #normalize precondition whenever methods of Integer and its subclasses are used.
John: What does the author of the DSA implementation think about this topic (at the risk of raising a long thread...)?
Greetings,
Stephan
Bob Arning wrote:
On Tue, 30 May 2000 14:24:49 EDT JArchibald@aol.com wrote:
and the argument to LargePositiveInteger>>new: is -18 (in statement mentioned in next paragraph). LargePositiveInteger>>new: works only if the parameter to #new: is >= 0. Since we are getting a negative value there, something in the new implementation of LargeInteger is causing this routine to go whacko.
Jerry,
Here is a bit more detail. In DigitalSignatureAlgorithm>>isProbablyPrime:, there is the following:
[snip]
	z _ self raise: a to: m mod: p.
	couldBePrime _ z = 1.
	[couldBePrime] whileFalse: [
		z = 1 ifTrue: [Transcript show: 'failed!'; cr. ^ false].  "not prime"
		z = pMinusOne
			ifTrue: [couldBePrime _ true]
			ifFalse: [
				(j _ j + 1) < b
					ifTrue: [z _ self remainder: (z * z) mod: p]
					ifFalse: [Transcript show: 'failed!'; cr. ^ false]]]].  "not prime"
[snip]
What can happen here is that z can be a one-byte LargePositiveInteger with a value of 1. Unfortunately, this is not equal to 1 with the new plugin.
[That is because comparing a non-normalized LargePositiveInteger with a SmallInteger leads to a wrong result. -- Stephan]
So couldBePrime is set to false and eventually we get to
z _ self remainder: (z * z) mod: p
with z still this "short" LargeInteger 1. Clearly, this method was trying very hard not to send #remainder:mod: with the first argument equal to 1, but this one slips right through both tests. So we wind up doing something like:
DigitalSignatureAlgorithm new remainder: 1 mod: 2342342344234
which creates the walkback that you are seeing. Now the comment in #remainder:mod: states
"Answer the remainder of dividing x by y, where x and y are both large positive integers of at least two digits."
so the walkback may be a design feature of this method and a bug elsewhere.
Note that if you change the line in #isProbablyPrime: to:
z _ (self raise: a to: m mod: p) normalize.
then DSA will generate the keys without a walkback. Whether things are really correct at that point, I don't know, but the walkback doesn't happen.
Cheers, Bob
Stephan, Bob, et al.,
Yes, I am the one who wrote the DSA code. Since the large integer primitives did not yet exist, I had to work pretty hard to make DSA work efficiently enough. (This code is actually used in a Disney Online project that is slowly working its way through testing.) One of the things I did to gain efficiency and simplify the primitives was to construct non-normalized large integers in places. This allows the DSA primitive code to demand that its arguments always be LargeIntegers with a minimum number of digits, even when the value could have fit into a SmallInteger. The DSA code never returns such a non-normalized LargeInt; it only uses them for internal calculations.
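An illustrative sketch of this convention (a hypothetical example using only the standard LargePositiveInteger protocol, not the actual DSA prim code): intermediate values are kept as LargePositiveIntegers with a fixed number of digit bytes, and must be sent #normalize before they reach code that uses the ordinary Integer protocol.

	| buf |
	buf _ LargePositiveInteger new: 64.	"64 digit bytes, all zero"
	buf digitAt: 1 put: 1.			"the value 1, still held as a 64-byte LargeInteger"
	buf normalize = 1.			"true only after explicit normalization"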
I think there are several ways to fix the problem:
a. Re-write the DSA code to use the new large integers and eliminate the large integer DSA prims (my preference). Bob Arning has tried this. It sounds like there is some performance penalty for division and remainder (about a factor of two).
b. Make the new large integer stuff comply with the old rules, under which non-normalized LargeIntegers are allowable.
c. Fix only the case(s) that are breaking the DSA code (sounds like it is in Integer>>digitCompare:).
I wish I could say that performance doesn't matter, but in this case, it really does matter, since signature checking on a slow machine can be a bottleneck for real applications. So I'm inclined to do (c) for the short term, then look at folding the DSA divide code into the mainline LargeInteger package. Once that is done, we could use Bob's modifications to DSA to cut over to the LargeInteger package for everything. Thoughts?
Many thanks to everyone who dove into this issue, and especially to Bob Arning for doing the performance comparisons and for converting DSA to use the new large integer primitives.
Incidentally, I'm about to vanish for a six week vacation in Europe. I'm afraid I won't be able to do much about this myself before I go.
-- John