<div dir="ltr">Hi Nicolas,<br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 29, 2016 at 2:36 PM, Nicolas Cellier <span dir="ltr"><<a href="mailto:nicolas.cellier.aka.nice@gmail.com" target="_blank">nicolas.cellier.aka.nice@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> <br><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2016-03-27 15:39 GMT+02:00 Nicolas Cellier <span dir="ltr"><<a href="mailto:nicolas.cellier.aka.nice@gmail.com" target="_blank">nicolas.cellier.aka.nice@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div>To be more specific,<br><br></div><div>The first thing to do would be to confirm that the KernelNumber tests fail with a squeak.stack.v3 VM compiled with the head revision of the COG VM branch, whatever the OS.<br><br></div>Then knowing from which SVN version of the COG VM branch the KernelNumber tests start failing would be nice.<br><br></div>The bisect job is to:<br></div>- iterate on version number (whatever strategy, bsearch or something)<br></div>- checkout VM sources<br></div>- compile the build.favouriteOS/squeak.stack.v3<br></div>- run a v3 image with the generated VM and launch the KernelNumber tests<br><br></div>Really a job for a jenkins bot, travis bot or whatever...<br></div><div>The next good thing would be to give a little love to <a href="http://build.squeak.org" target="_blank">build.squeak.org</a> or any other similar solution.<br></div><div>I only see red disks on this site...<br></div></div><div><div><div class="gmail_extra"><br></div></div></div></blockquote><div><br></div><div>Follow up: I bisected manually:<br>3648 (VMMaker.oscog-eem.nice.1729) OK<br>3649 
(VMMaker.oscog-eem.1740) Not OK<br><br></div><div>So something went wrong for V3 between these two versions.<br></div><div>At the same time, it works for Spur.<br></div><div><br></div><div>Spur objects are 8-byte aligned, v3 objects are 4-byte aligned...<br></div><div>So fetching 64 bits from V3 MUST be decomposed into two 32-bit fetches to avoid a segfault caused by a misaligned fetch.<br><br></div><div>OK, let's look at how it is performed:<br><br>fetchLong64: longIndex ofObject: oop<br> <returnTypeC: #sqLong><br> ^self cppIf: BytesPerWord = 8<br> ifTrue: [self long64At: oop + self baseHeaderSize + (longIndex << 3)]<br> ifFalse:<br> [self cppIf: VMBIGENDIAN<br> ifTrue: [((self long32At: oop + self baseHeaderSize + (longIndex << 3)) asUnsignedLongLong << 32)<br> + (self long32At: oop + self baseHeaderSize + (longIndex << 3 + 4))]<br> ifFalse: [(self long32At: oop + self baseHeaderSize + (longIndex << 3))<br> + ((self long32At: oop + self baseHeaderSize + (longIndex << 3 + 4)) asUnsignedLongLong << 32)]]<br><br></div><div>AH AH! The operation is:<br> low + (((unsigned) high) << 32)<br><br></div><div>With low declared as SIGNED (long32At is signed GRRR).<br></div><div>What if the sign bit of low is 1? Then the compiler will perform:<br><br></div><div>(unsigned long) low + ...<br><br></div><div>AND THIS WILL DO A SIGN EXTENSION...<br></div><div><br></div><div>It's not that my code was wrong...<br></div><div>It's just that it uncovered a bug in fetchLong64:ofObject:<br><br></div><div>That cost me many hours, but it reinforces my conviction about signedness...<br></div><div>I would much much much prefer to call unsignedLong32At because it's what I mean 9 times out of 10: get the magnitude...<br></div></div></div></div></blockquote><div><br></div><div>Let's add them. It would be great to have them all consistent too. But at least let's add unsignedLong32At:[put:]. Wait. In the simulator longAt:[put:] et al are unsigned. 
So we have three choices:</div><div><br></div><div>a) live with it</div><div>b) make longAt:[put:] long32At:[put:] et al unsigned in C, and add signedLongAt:[put:] et al. </div><div>c) make the simulator's longAt:[put:] long32At:[put:] signed and then add unsignedLongAt:[put:] et al</div><div><br></div><div>Any others?</div><div><br></div><div>I think a) is unwise. The simulator and C should agree. Nicolas, I share your experience in finding that debugging these issues is extremely time consuming.</div><div><br></div><div>My preference is for b); some uses where the type should be signed will be located by C compiler warnings, and we hope that VM tests will catch the others.<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div><div><div class="gmail_extra"><div class="gmail_quote">2016-03-27 0:40 GMT+01:00 Nicolas Cellier <span dir="ltr"><<a href="mailto:nicolas.cellier.aka.nice@gmail.com" target="_blank">nicolas.cellier.aka.nice@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi,<br></div>before I continue, I've noticed that the large integer multiply seems broken on the v3 object memory (cog & stack)<br></div><div>Note that this does not happen on Spur.<br></div></div><br></div>This is independent of my recent changes to the LargeIntegers plugin, as it happens BEFORE these changes and is related to primitive 29 rather than to the plugin...<br>Here are the symptoms:<br></div><br>halfPower := 10000.<br>s 
:= 111111111111111.<br>head := s quo: halfPower.<br>tail := s - (head * halfPower).<br></div>{<br> head as: ByteArray.<br> (1 to: halfPower digitLength) collect: [:i | halfPower digitAt: i] as: ByteArray.<br></div> (head*halfPower) as: ByteArray.<br>}.<br><div><div><br></div><div>The correct result is:<br> #(#[199 25 70 150 2] #[16 39] #[112 237 78 18 14 101])<br><br></div><div>The wrong result I obtained with SVN revision 3651 compiled by myself is:<br> #(#[199 25 70 150 2] #[16 39] #[112 237 78 18 254 61])<br><br></div><div>The most significant bits (above 32) are wrong...<br></div><div>The pattern I obtain is (with the most significant byte put back on the left):<br><br></div><div>2r00111101 << 8 + 2r11111110 "wrong result"<br></div><div>2r01100101 << 8 + 2r00001110 "correct result"<br><br></div><div>I completely fail to infer what's going on from this pattern...<br></div><br><div>This is on MacOSX; clang --version:<br>Apple LLVM version 7.3.0 (clang-703.0.29)<br>Target: x86_64-apple-darwin15.4.0<br><br></div><div>This goes through primitiveMultiplyLargeIntegers (29)<br>oopResult := self magnitude64BitIntegerFor: result neg: aIsNegative ~= bIsNegative.<br>-> sz > 4<br> ifTrue: [objectMemory storeLong64: 0 ofObject: newLargeInteger withValue: magnitude]<br></div><div>(which I changed recently)<br><br></div><div>then:<br>storeLong64: longIndex ofObject: oop withValue: value<br> <var: #value type: #sqLong><br> self flag: #endianness.<br> self long32At: oop + self baseHeaderSize + (longIndex << 3) put: (self cCode: [value] inSmalltalk: [value bitAnd: 16rFFFFFFFF]);<br> long32At: oop + self baseHeaderSize + (longIndex << 3) + 4 put: (value >> 32).<br> ^value<br><br></div><div>I don't see anything wrong with this code...<br>Well, using a shift on a signed value is not that good, but it works for at least 3 reasons:<br>- we throw the signBit extension away<br>- Slang inlining misses the signedness difference, and the generated C code is correct.<br>- Anyway, in our case, the sign bit was 
0...<br><br>The previous implementation in magnitude64BitIntegerFor:neg: was:<br>sz > 4 ifTrue:<br> [objectMemory storeLong32: 1 ofObject: newLargeInteger withValue: magnitude >> 32].<br> objectMemory<br> storeLong32: 0<br> ofObject: newLargeInteger<br> withValue: (self cCode: [magnitude] inSmalltalk: [magnitude bitAnd: 16rFFFFFFFF])<br></div><div><div><div><br></div><div>Not much different, except that the high 32 bits and low 32 bits are written in a different order...<br><br>If I had a server I'd like to bisect:<br><div>- from which version this happens<br></div><div>- for which OS<br></div><div>- for which compiler<br><br><div>Without such information, I think I'll have to debug it either through the simulator or directly in gdb, but I feel like I'm really wasting my time :(<br><br></div><div>And I've got a 2nd problem like this one...<br></div><div><br></div><br></div><div><br></div><br></div></div></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div></div>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div><span style="font-size:small;border-collapse:separate"><div>_,,,^..^,,,_<br></div><div>best, Eliot</div></span></div></div></div>
</div></div>