Hi Nicolas,
"Nicolas Cellier"nicolas.cellier.aka.nice@gmail.com wrote:
Hi again, I now have ported two more packages
- Xtreams-Transforms
- Xtreams-Substreams
and their tests
This is great! I loaded your code into Pharo 1.1 and things seem to be working quite well. There was a complaint about a missing SharedQueue2; I just created a dummy subclass of SharedQueue with that name and things seemed to load fine. XTRecyclingCenter seems to be a subclass of XTWriteStream; it should be a subclass of Object. Maybe a typo?
I did not have any portability problems with those... but that's because I did not handle the Character encoder/decoder. Consequently, I have 8 tests failing (the Base64-related tests).
I was thinking we could implement just 'encoding: #ascii' quite easily, to make this reasonably usable at least for applications that are fine with that. We're actually contemplating implementing our own encoders for Xtreams too. The VW ones are tied to the classic streams more than we like. You might have noticed some rather hairy parts in the encoding streams yourself, where we're trying to work around some of the issues that creates. The advantage of reusing the existing encoders was that there are quite a few of them available, so reimplementing all that would be a drag. But we can come up with a scheme where we reimplement at least the common ones, and in VW we can still preserve hooking into the old ones for the rest. I can give that a try on the VW side in the meantime, so you could get those for free.
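For the interim #ascii case, something along these lines might already do, using the collecting transform rather than a real encoder (just a sketch, not the eventual encoder API; 'stream' stands for whichever binary stream you're wrapping):

    "encode: characters -> bytes, failing fast on anything outside 7-bit ASCII"
    encoded := stream collecting: [ :char |
        char asInteger < 128
            ifTrue: [ char asInteger ]
            ifFalse: [ self error: 'not an ASCII character' ] ].
    "decode: bytes -> characters"
    decoded := stream collecting: [ :byte | byte asCharacter ].

That obviously punts on everything interesting about encodings, but it would unblock the Base64 tests for the ASCII-only case.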
Plus 4 other tests failing because of my poor implementation of #after:do: (forking processes in an SUnit TestCase isn't as straightforward as it looks).
I looked at this, and I think this is how #after:do: should look like:
after: aDelay do: aBlock
    "Evaluate the argument block delayed after the specified duration."

    | watchdog |
    watchdog := [ aDelay wait. aBlock value ] newProcess.
    watchdog priority: Processor userInterruptPriority.
    watchdog resume
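A call site would then look something like this (assuming the Delay-instead-of-Duration change below; 'connection' is just an illustrative receiver):

    self after: (Delay forMilliseconds: 200) do: [ connection close ]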
This would assume that the 2 tests calling #timeout:server:client: use a Delay instance instead of a Duration, which I'd be fine with. However, making that change doesn't quite get the tests running. It blows up with a DNU on the 'output close' bit in #terminate:server:client:, with output being nil, which I'm having trouble figuring out. I can't find who could possibly be nilling it out. I'm somewhat struggling to find my way around the Pharo tools. The test seems to otherwise pass, but the DNUs from the background process aren't nice. Also, when I just click on the test in the TestRunner, I actually get four DNUs, not just one as I would expect. So I'm kinda stuck, not sure how to move forward without help from someone who knows his way around Pharo.

I also get an odd failure from #testWriteCollectingMultipleBufferSize, which runs fine (against a collection) when I run the equivalent in a workspace, but strangely fails when running via the #timeout:server:client: construct, i.e. when client and server run in separate processes. Hm, now that I think of it, they sure could fail if something preempts the client and server processes at the right moment. I'll have to rethink that again.
Now the easy part of the port (copy/paste) is almost done. Once we manage a compatible way to handle pragmas, the PEG Parser should port quite easily too.
I wouldn't worry about the Parser stuff at this point.
Then, the harder work begins:
- File/Socket/Pipe
- Pointers (in External Heap)
I wouldn't worry about the external heap stuff either. It's neat, but probably not something many people will miss.
- Character encoding/decoding
I'll see if I can help with this from the VW side.
- Compression/Decompression
Is zlib linked into the VM in Squeak too? The compression streams are written directly against the zlib API, so there aren't any VW-specific dependencies other than how those calls are made. Similarly, the crypto streams go directly against the EVP API in OpenSSL's libcrypto, so as long as we can abstract over how those are called, the stream implementations should work as is.
If you think you can help in any of these, please tell.
If you could compile a list of changes that you'd like us to adopt on the VW side, I'd certainly look at that. I did read your posts but it's not entirely clear what you'd like to handle on Squeak side and what on VW side. A fileout would be best to avoid any confusion, but a description is fine too.
I'm also unclear about the #become: discussion. Don't write streams in Squeak #become: the underlying collection when they grow it?
Cheers,
Martin
On Sun, 10 Oct 2010, mkobetic@gmail.com wrote:
[...]
These problems should be solved with the latest version of CoreTests.
[...]
I'm also unclear about the become: discussion. Don't write streams in Squeak become: the underlying collection when they grow it ?
No. Squeak uses direct pointers, so it doesn't have an object table; therefore #become: is very expensive. See the Storage Management section of http://ftp.squeak.org/docs/OOPSLA.Squeak.html for details.
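So on the Squeak side a write stream would grow by copying instead, roughly like this (a sketch; the instance variable name 'collection' is illustrative, not the actual Xtreams one):

    growTo: anInteger
        "Replace the underlying collection with a larger copy.
        Copying is cheap in Squeak, unlike #become:, which requires
        a full memory scan in a direct-pointer object memory."
        | new |
        new := collection class new: anInteger.
        new replaceFrom: 1 to: collection size with: collection startingAt: 1.
        collection := new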
Levente
Pharo-project mailing list Pharo-project@lists.gforge.inria.fr http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
2010/10/11 Levente Uzonyi leves@elte.hu:
[...]
These problems should be solved with the latest version of CoreTests.
Levente
Thanks!
On Mon, 11 Oct 2010, Nicolas Cellier wrote:
[...]
Thanks!
I just realized that sometimes the 100 ms timeout is not enough even on Cog, and the test fails; 200 ms works well. It typically happens when running #testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or more, to make sure it doesn't fail on SqueakVM or on slower machines.
Levente
2010/10/11 Levente Uzonyi leves@elte.hu:
[...]
I just realized that sometimes the 100 ms timeout is not enough even on Cog and the test fails, 200 ms works well. It typically happens when running #testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or more, to make sure it doesn't fail with SqueakVM or on slower machines.
Levente
Just proceed (in http://www.squeaksource.com/Xtreams) if you want to.
Hmm, I increased to 1000ms, but still get some random failures...
Nicolas
On Tue, 12 Oct 2010, Nicolas Cellier wrote:
[...]
Hmm, I increased to 1000ms, but still get some random failures...
Yes, it's a bit strange. If I change it to 200ms, every test passes, no random failures. If I increase it to 1000ms, I get random failures.
Levente
On Tue, 12 Oct 2010, Levente Uzonyi wrote:
[...]
Yes, it's a bit strange. If I change it to 200ms, every test passes, no random failures. If I increase it to 1000ms, I get random failures.
Okay, I tracked down the real cause of the problem. The tests perform simple producer-consumer scenarios, but the consumers won't wait at all; if there's nothing to consume, the test fails. The randomness comes from the scheduler. The server process is started first, the client second. If the server process can produce enough input for the client process, the test will pass. If you decrease the priority of the client process by one in #timeout:server:client:, the randomness is gone and the tests pass reliably, because the client can't starve the server. To avoid false timeouts I had to increase the timeout value to 2000 milliseconds when using SqueakVM.
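In #timeout:server:client: terms, the tweak would be something like this (a sketch; the variable names and the exact priorities are illustrative, not the actual method body):

    serverProcess := [ serverBlock value ] newProcess.
    serverProcess priority: Processor userSchedulingPriority.
    clientProcess := [ clientBlock value ] newProcess.
    "one below the server, so the consumer cannot starve the producer"
    clientProcess priority: Processor userSchedulingPriority - 1.
    serverProcess resume.
    clientProcess resume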
I also found an issue: the process in XTTransformWriteStream doesn't terminate. If you run the tests, you'll get 12 lingering processes.
Levente
2010/10/12 Levente Uzonyi leves@elte.hu:
[...]
Okay, I really tracked down the cause of the problem. The tests perform simple producer-consumer scenarios, but the consumers won't wait at all. If there's nothing to consume, the test fails. The randomness come from the scheduler. The server process is started first, the client is the second. If the server process can produce enough input to the client process the test will pass. If you decrease the priority of the client process by one in #timeout:server:client:, the randomness will be gone and the tests will reliably pass, because the client won't be able to starve the server. To avoid false timeouts I had to increase the timeout value to 2000 milliseconds using SqueakVM.
Good!
I also found an issue: the process in XTTransformWriteStream doesn't terminate. If you run the tests, you'll get 12 lingering processes.
Ah yes, the #drainBuffer process...
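Presumably the fix is to terminate that process when the stream is closed, something along these lines (a sketch; the instance variable name 'drainProcess' is a guess at how XTTransformWriteStream holds onto it):

    close
        super close.
        drainProcess ifNotNil: [
            drainProcess terminate.
            drainProcess := nil ]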
Nicolas
On Oct 10, 2010, at 5:18 PM, mkobetic@gmail.com wrote:
[...]
If you get Interpreting and Marshaling going, we'll have ST-to-ST communication between Pharo <-> Squeak <-> VisualWorks with a fast binary protocol. I suspect some trickery will be required to deal with namespaces, as the protocol assumes 'full names' for classes when matching between the ends.
Michael