[squeak-dev] Re: [Pharo-project] Xtreams port to Squeak - second wave

Levente Uzonyi leves at elte.hu
Mon Oct 11 19:30:26 UTC 2010


On Mon, 11 Oct 2010, Nicolas Cellier wrote:

> 2010/10/11 Levente Uzonyi <leves at elte.hu>:
>> On Sun, 10 Oct 2010, mkobetic at gmail.com wrote:
>>
>>> Hi Nicolas,
>>>
>>> "Nicolas Cellier"<nicolas.cellier.aka.nice at gmail.com> wrote:
>>>>
>>>> Hi again,
>>>> I now have ported two more packages
>>>> - Xtreams-Transforms
>>>> - Xtreams-Substreams
>>>> and their tests
>>>
>>> This is great! I loaded your code into Pharo 1.1 and things seem to be
>>> working quite well. There was a complaint about a missing SharedQueue2;
>>> I just created a dummy subclass of SharedQueue with that name (sketch
>>> below) and things seemed to load fine.
>>> XTRecyclingCenter seems to be a subclass of XTWriteStream; it should be
>>> a subclass of Object. Maybe a typo?
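>>>
>>> (For reference, the dummy was just an empty subclass along these lines;
>>> the category name below is my own placeholder, not anything from the
>>> Xtreams packages:
>>>
>>>        SharedQueue subclass: #SharedQueue2
>>>                instanceVariableNames: ''
>>>                classVariableNames: ''
>>>                poolDictionaries: ''
>>>                category: 'Xtreams-Support'
>>>
>>> It adds no behavior, it merely satisfies the class reference.)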
>>>
>>>> I did not have any portability problems with those...
>>>> But that's because I did not handle the Character encoder/decoder.
>>>> Consequently, I have 8 tests failing (the Base64-related tests).
>>>
>>> I was thinking, we could implement just 'encoding: #ascii' quite easily,
>>> to make this reasonably usable at least for applications that are fine
>>> with that. We're actually contemplating implementing our own encoders for
>>> Xtreams too. The VW ones are tied to the classic streams more than we
>>> like. You might have noticed some rather hairy parts in the encoding
>>> streams yourself, where we're trying to work around some of the issues
>>> that creates. The advantage of reusing the existing encoders was that
>>> quite a few of them are available, so reimplementing all that would be a
>>> drag. But we can come up with a scheme where we reimplement at least the
>>> common ones, while in VW we can still hook into the old ones for the
>>> rest. I can give that a try on the VW side in the meantime, so you could
>>> get those for free.
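>>>
>>> (ASCII is the easy case because code points below 128 map one-to-one to
>>> bytes. Ignoring any particular stream API, the two directions are
>>> essentially just
>>>
>>>        encode := [:char | char asInteger < 128
>>>                ifTrue: [char asInteger]
>>>                ifFalse: [self error: 'not encodable as ASCII']].
>>>        decode := [:byte | Character value: byte].
>>>
>>> which is why an #ascii encoding looks like a cheap first step. The block
>>> names above are made up for illustration.)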
>>>
>>>> Plus 4 other tests failing because of my poor implementation of
>>>> #after:do: (forking processes in a SUnit TestCase isn't as obvious as
>>>> it looks).
>>>
>>> I looked at this, and I think this is how #after:do: should look:
>>>
>>> after: aDelay do: aBlock
>>>        "Evaluate the argument block after the specified delay has elapsed."
>>>
>>>        | watchdog |
>>>        watchdog := [
>>>                aDelay wait.
>>>                aBlock value.
>>>        ] newProcess.
>>>        "Run above the normal user priority, so the watchdog fires even
>>>        if the test code keeps the active process busy."
>>>        watchdog priority: Processor userInterruptPriority.
>>>        watchdog resume.
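>>>
>>> (For illustration, a caller would then pass a Delay rather than a
>>> Duration, e.g.
>>>
>>>        self after: (Delay forMilliseconds: 100) do: [connection close]
>>>
>>> where the receiver and the block are made up, not the actual test code.)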
>>>
>>> This would assume that the two tests calling #timeout:server:client:
>>> would use a Delay instance instead of a Duration, which I'd be fine with.
>>> However, making that change doesn't quite get the tests running. It's
>>> blowing up with a DNU on the 'output close' bit in
>>> #terminate:server:client:, with output being nil, which I'm having
>>> trouble figuring out. I can't find who could possibly be nilling it out.
>>> I'm somewhat struggling to find my way around the Pharo tools. The test
>>> otherwise seems to pass, but the DNUs from the background process aren't
>>> nice. Also, when I just click on the test in the TestRunner, I actually
>>> get four DNUs, not just one as I would expect. So I'm kinda stuck, not
>>> sure how to move forward without help from someone who knows his way
>>> around Pharo.
>>> I also get an odd failure from #testWriteCollectingMultipleBufferSize,
>>> which seems to run fine (against a collection) when I run the equivalent
>>> in a workspace, but strangely fails when running via the
>>> #timeout:server:client: construct, i.e. when the client and server run in
>>> separate processes. Hm, now that I think of it, they sure could fail if
>>> something preempts the client or server process at the wrong moment. I'll
>>> have to rethink that.
>>
>> These problems should be solved with the latest version of CoreTests.
>>
>> Levente
>
> Thanks!

I just realized that sometimes the 100 ms timeout is not enough even on
Cog and the test fails; 200 ms works well. This typically happens when
running #testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or
more, to make sure the tests don't fail with SqueakVM or on slower machines.
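
For illustration, assuming the timeout argument of #timeout:server:client:
ends up being a Delay as suggested above, the change in the tests would be
along the lines of

        self
                timeout: (Delay forMilliseconds: 1000)
                server: serverBlock
                client: clientBlock

where serverBlock and clientBlock just stand for the existing test blocks.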


Levente


