From hilaire at drgeo.eu Fri Jan 1 10:09:53 2016 From: hilaire at drgeo.eu (Hilaire) Date: Fri Jan 1 10:10:04 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> Message-ID: Le 31/12/2015 15:42, Sven Van Caekenberghe a ?crit : > A possible solution is to somehow start with a WideString from the beginning (or force it before doing all elements one by one). In the hack you are suggesting, does it not required the whole contents of the rendered page html code to be put as WideString? Indeed, hacking only my object #printOn: method to send WideString to the stream does not help. Hilaire -- Dr. Geo http://drgeo.eu http://google.com/+DrgeoEu From arning315 at comcast.net Fri Jan 1 11:54:45 2016 From: arning315 at comcast.net (Bob Arning) Date: Fri Jan 1 11:56:47 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> Message-ID: <56866905.1090908@comcast.net> What worked for me once upon a time was to convert the euro character to a series of utf-8 bytes. http://www.fileformat.info/info/unicode/char/20aC/index.htm so, for euro-currency signU+20A0 you would need a 3-byte string whose characters in hex were: 0xE2 0x82 0xAC In squeak, I used this utf8ForCodePoint: cp "=== SEUtils utf8ForCodePoint: 16r2190 16r2190 radix: 2 '10000110010000' 16 U+FFFF 1110xxxx 10xxxxxx 10xxxxxx ===" | bits answer | bits := (cp radix: 2) padded: #left to: 16 with: $0. answer := String new. { '2r1110',(bits copyFrom: 1 to: 4). '2r10',(bits copyFrom: 5 to: 10). '2r10',(bits copyFrom: 11 to: 16). } do: [ :radix2 | answer := answer, (String with: (Character value: radix2 asNumber)). ]. ^answer On 1/1/16 5:09 AM, Hilaire wrote: > Le 31/12/2015 15:42, Sven Van Caekenberghe a ?crit : >> A possible solution is to somehow start with a WideString from the beginning (or force it before doing all elements one by one). > In the hack you are suggesting, does it not required the whole contents > of the rendered page html code to be put as WideString? > Indeed, hacking only my object #printOn: method to send WideString to > the stream does not help. > > Hilaire > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160101/d5eec64e/attachment.htm From johan at inceptive.be Fri Jan 1 12:00:12 2016 From: johan at inceptive.be (Johan Brichau) Date: Fri Jan 1 12:00:15 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> Message-ID: Hi Hilaire, If you are rendering a euro symbol on an html page, it?s probably better to use the html character code: html encodeCharacter: Character euro -or- html html: '€? It would also be helpful if you report the performance issue you have with some canonical code to reproduce it on https://github.com/SeasideSt/Seaside/issues It might be useful to investigate what happens and if it can be prevented. Though, the page document always needs to be encoded as a utf8 document, so I assume internally a bytestring is used. happy new year Johan > On 01 Jan 2016, at 11:09, Hilaire wrote: > > Le 31/12/2015 15:42, Sven Van Caekenberghe a ?crit : >> A possible solution is to somehow start with a WideString from the beginning (or force it before doing all elements one by one). > > In the hack you are suggesting, does it not required the whole contents > of the rendered page html code to be put as WideString? 
> Indeed, hacking only my object #printOn: method to send WideString to > the stream does not help. > > Hilaire > > -- > Dr. Geo > http://drgeo.eu > http://google.com/+DrgeoEu > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160101/ea30bcef/attachment-0001.htm From hilaire at drgeo.eu Fri Jan 1 15:51:41 2016 From: hilaire at drgeo.eu (Hilaire) Date: Fri Jan 1 15:52:00 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> Message-ID: Le 01/01/2016 13:00, Johan Brichau a ?crit : > Hi Hilaire, > > If you are rendering a euro symbol on an html page, it?s probably > better to use the html character code: > > html encodeCharacter: Character euro > -or- > html html: '€? Yes, but I want to do it at the object level with its text representation. So I can have a nice view of this object from both the web browser and the Pharo environment. > > It would also be helpful if you report the performance issue you have > with some canonical code to reproduce it > on https://github.com/SeasideSt/Seaside/issuesIt might be useful to > investigate what happens and if it can be prevented. Though, the page > document always needs to be encoded as a utf8 document, so I assume > internally a bytestring is used. Posted there https://github.com/SeasideSt/Seaside/issues/862 For me it is all utf8, but it looks like not all along the way... Thanks Hilaire From hilaire at drgeo.eu Fri Jan 1 16:19:41 2016 From: hilaire at drgeo.eu (Hilaire) Date: Fri Jan 1 16:19:43 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <56866905.1090908@comcast.net> References: <56853583.6040800@drgeo.eu> <56866905.1090908@comcast.net> Message-ID: <5686A71D.6010406@drgeo.eu> Thanks for the tips. I naively try this: printOn: aStream << (amount asScaledDecimal: 2) greaseString << ' ' << (ByteArray with: 16rE2 with: 16r82 with: 16rAC) asString But somehow I must be missing something because it does not work as expected. I got garbage currency :) 514.25 ??? Hilaire Le 01/01/2016 12:54, Bob Arning a ?crit : > What worked for me once upon a time was to convert the euro character > to a series of utf-8 bytes. > > http://www.fileformat.info/info/unicode/char/20aC/index.htm > > so, for euro-currency sign U+20A0 > you > would need a 3-byte string whose characters in hex were: 0xE2 0x82 0xAC > > In squeak, I used this > > utf8ForCodePoint: cp > > "=== > SEUtils utf8ForCodePoint: 16r2190 > > 16r2190 radix: 2 '10000110010000' > > 16 U+FFFF 1110xxxx 10xxxxxx 10xxxxxx > ===" > > | bits answer | > > bits := (cp radix: 2) padded: #left to: 16 with: $0. > answer := String new. > { > '2r1110',(bits copyFrom: 1 to: 4). > '2r10',(bits copyFrom: 5 to: 10). > '2r10',(bits copyFrom: 11 to: 16). > } do: [ :radix2 | > answer := answer, (String with: (Character value: radix2 > asNumber)). > ]. > > ^answer > > On 1/1/16 5:09 AM, Hilaire wrote: >> Le 31/12/2015 15:42, Sven Van Caekenberghe a ?crit : >>> A possible solution is to somehow start with a WideString from the beginning (or force it before doing all elements one by one). >> In the hack you are suggesting, does it not required the whole contents >> of the rendered page html code to be put as WideString? 
>> Indeed, hacking only my object #printOn: method to send WideString to >> the stream does not help. >> >> Hilaire >> > > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -- Dr. Geo http://drgeo.eu http://google.com/+DrgeoEu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160101/dad86ca3/attachment.htm From arning315 at comcast.net Fri Jan 1 17:16:00 2016 From: arning315 at comcast.net (Bob Arning) Date: Fri Jan 1 17:18:02 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <5686A71D.6010406@drgeo.eu> References: <56853583.6040800@drgeo.eu> <56866905.1090908@comcast.net> <5686A71D.6010406@drgeo.eu> Message-ID: <5686B450.2010102@comcast.net> Sounds like an encoding issue. Your code inserted into my app produced a euro sign where I had been printing a fancy leftarrow. This is a much older seaside and has a GRNullCodec as the codec of the requestContext. If you are using a different codec, then it may be thinking your utf8 euro sign is really 3 oddball characters and rendering them separately. Perhaps you can do something with the codec (like use a null?). On 1/1/16 11:19 AM, Hilaire wrote: > > Thanks for the tips. > > I naively try this: > > printOn: > aStream << (amount asScaledDecimal: 2) greaseString > << ' ' << (ByteArray with: 16rE2 with: 16r82 with: 16rAC) asString > > > But somehow I must be missing something because it does not work as > expected. > I got garbage currency :) 514.25 ??? > > Hilaire > > > Le 01/01/2016 12:54, Bob Arning a ?crit : >> What worked for me once upon a time was to convert the euro character >> to a series of utf-8 bytes. >> >> http://www.fileformat.info/info/unicode/char/20aC/index.htm >> >> so, for euro-currency signU+20A0 >> you >> would need a 3-byte string whose characters in hex were: 0xE2 0x82 0xAC >> >> In squeak, I used this >> >> utf8ForCodePoint: cp >> >> "=== >> SEUtils utf8ForCodePoint: 16r2190 >> >> 16r2190 radix: 2 '10000110010000' >> >> 16 U+FFFF 1110xxxx 10xxxxxx 10xxxxxx >> ===" >> >> | bits answer | >> >> bits := (cp radix: 2) padded: #left to: 16 with: $0. >> answer := String new. >> { >> '2r1110',(bits copyFrom: 1 to: 4). >> '2r10',(bits copyFrom: 5 to: 10). >> '2r10',(bits copyFrom: 11 to: 16). >> } do: [ :radix2 | >> answer := answer, (String with: (Character value: radix2 >> asNumber)). >> ]. >> >> ^answer >> >> On 1/1/16 5:09 AM, Hilaire wrote: >>> Le 31/12/2015 15:42, Sven Van Caekenberghe a ?crit : >>>> A possible solution is to somehow start with a WideString from the beginning (or force it before doing all elements one by one). >>> In the hack you are suggesting, does it not required the whole contents >>> of the rendered page html code to be put as WideString? >>> Indeed, hacking only my object #printOn: method to send WideString to >>> the stream does not help. >>> >>> Hilaire >>> >> >> >> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > -- > Dr. Geo > http://drgeo.eu > http://google.com/+DrgeoEu > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160101/4fecf77f/attachment.htm From hilaire at drgeo.eu Fri Jan 1 18:14:16 2016 From: hilaire at drgeo.eu (Hilaire) Date: Fri Jan 1 18:14:34 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <5686B450.2010102@comcast.net> References: <56853583.6040800@drgeo.eu> <56866905.1090908@comcast.net> <5686A71D.6010406@drgeo.eu> <5686B450.2010102@comcast.net> Message-ID: Le 01/01/2016 18:16, Bob Arning a ?crit : > Sounds like an encoding issue. Your code inserted into my app produced > a euro sign where I had been printing a fancy leftarrow. This is a > much older seaside and has a GRNullCodec as the codec of the > requestContext. If you are using a different codec, then it may be > thinking your utf8 euro sign is really 3 oddball characters and > rendering them separately. Perhaps you can do something with the codec > (like use a null?). Understood the general idea but not sure how to proceed. Thanks for the tips anyway. Hilaire -- Dr. Geo http://drgeo.eu http://google.com/+DrgeoEu From johan at inceptive.be Fri Jan 1 21:13:30 2016 From: johan at inceptive.be (Johan Brichau) Date: Fri Jan 1 21:13:36 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> Message-ID: <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> > On 01 Jan 2016, at 16:51, Hilaire wrote: > > Posted there https://github.com/SeasideSt/Seaside/issues/862 > For me it is all utf8, but it looks like not all along the way... Thanks for posting that. Since the slowdown is only in the example where you concatenate the euro symbol to the string before writing it on the canvas, it?s definitely not something we can/should solve in Seaside. I wanted to make sure if there is anything that slows down the page rendering when there are widestrings being put on the canvas, but it?s clearly not the case. I think the only solution is to separate the rendering of the object on a Seaside canvas and the Pharo object inspector. I.e. write a #renderOn: html method on the object which implemented the optimal way of rendering it on a Seaside canvas. cheers, Johan From sven at stfx.eu Sat Jan 2 00:22:14 2016 From: sven at stfx.eu (Sven Van Caekenberghe) Date: Sat Jan 2 00:22:16 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> References: <56853583.6040800@drgeo.eu> <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> Message-ID: <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> This code should illustrate what is happening: [ String streamContents: [ :out | out << 'Pricelist'; cr. 1 to: 1000 do: [ :each | out << 'Item nr. ' << each << ' costs EUR' << (each * 100); cr ] ] ] bench. "'1,662 per second'" [ String streamContents: [ :out | out << 'Pricelist'; cr. 1 to: 1000 do: [ :each | out << 'Item nr. ' << each << ' costs ?' << (each * 100); cr ] ] ] bench. "'710.058 per second'" [ String streamContents: [ :out | out << 'Pricelist'; cr. 1 to: 1000 do: [ :each | out << (String streamContents: [ :str | str << 'Item nr. ' << each << ' costs ?' << (each * 100); cr ]) ] ] ] bench. "'156.350 per second'" I suspect you are in the third case: by calling #printString for each item, you implicitly invoke the #become: (a factor 10 slowdown) - In the second case, this happens only once. Johan's suggestion is an easy solution, but basically comes down to 'do not use anything outside Latin1 unless you HTML encode it', which is (1) quite limited (2) should not be your job. 
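For reference, the promotion that triggers the #become: is easy to see in a workspace. A minimal sketch using only core Pharo String and Character classes (the euro sign is built from its code point 16r20AC so the snippet itself stays plain ASCII):

| plain euro mixed |
plain := 'costs 100'.                               "a ByteString"
euro := String with: (Character value: 16r20AC).    "a one-character WideString"
mixed := plain , euro.                              "concatenation promotes the result"
mixed class.                                        "==> WideString"

In the third case above this conversion happens for every item, whereas in the second case the stream's contents are promoted only once for the whole document.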
Generating raw UTF-8 yourself is not a good idea I think (it goes against the way Strings are supposed to work). I use the Euro sign and other Unicode characters in a Seaside web app without encoding them and I did not notice any slowdowns (http://store.audio359.eu). > On 01 Jan 2016, at 22:13, Johan Brichau wrote: > > >> On 01 Jan 2016, at 16:51, Hilaire wrote: >> >> Posted there https://github.com/SeasideSt/Seaside/issues/862 >> For me it is all utf8, but it looks like not all along the way... > > Thanks for posting that. > Since the slowdown is only in the example where you concatenate the euro symbol to the string before writing it on the canvas, it?s definitely not something we can/should solve in Seaside. > I wanted to make sure if there is anything that slows down the page rendering when there are widestrings being put on the canvas, but it?s clearly not the case. > > I think the only solution is to separate the rendering of the object on a Seaside canvas and the Pharo object inspector. > I.e. write a #renderOn: html method on the object which implemented the optimal way of rendering it on a Seaside canvas. > > cheers, > Johan_______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From johan at inceptive.be Sat Jan 2 08:45:33 2016 From: johan at inceptive.be (Johan Brichau) Date: Sat Jan 2 08:45:38 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> References: <56853583.6040800@drgeo.eu> <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> Message-ID: > On 02 Jan 2016, at 01:22, Sven Van Caekenberghe wrote: > > Johan's suggestion is an easy solution, but basically comes down to 'do not use anything outside Latin1 unless you HTML encode it', which is (1) quite limited (2) should not be your job. No, that?s not what I am suggesting and there is absolutely no issue when adding widestrings to the Seaside document when generating. That is what I wanted to make sure when I asked Hilaire to report the issue on github. As Hilaire points out himself (and your code snippets also demonstrate that), the issue is the string concatenation before you put the string on the Seaside document: Time millisecondsToRun: [WAHtmlCanvas builder render: [ :html | 1500 timesRepeat: [ html text: 'hello', '?' ] ]]. -> +- 3000ms on my machine Time millisecondsToRun: [WAHtmlCanvas builder render: [ :html | 1500 timesRepeat: [ html text: 'hello'; text: '?' ] ]]. -> +- 5ms on my machine My suggestion is not to use the #printOn: method for rendering the object on a Seaside canvas, but rather implement a #renderOn: method that avoids concatenating the String and WideString instances. cheers Johan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160102/bc2c6905/attachment.htm From hilaire at drgeo.eu Sat Jan 2 09:12:55 2016 From: hilaire at drgeo.eu (Hilaire) Date: Sat Jan 2 09:12:57 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> Message-ID: <56879497.8040008@drgeo.eu> Le 02/01/2016 09:45, Johan Brichau a ?crit : > My suggestion is not to use the #printOn: method for rendering the > object on a Seaside canvas, but rather implement a #renderOn: method > that avoids concatenating the String and WideString instances. > Thanks for the tip it helps. However use of renderOn: will have some large implication on my Seaside components. Each use case of this object in component need to be rewritten to use only render:. I don't like much this idea, it makes the written code less elegant and consistent. Next, I understand now why I noticed this important slow down from the Pharo Inspector when browsing such collection object with ? symbol on the EyeTreeInspector. I was first believing the slow down was because of the Inspector, but it is the same problem, which appear more clearly as a limitation of Pharo itself. I will resume discussion on the Pharo user list then. Thanks Hilaire -- Dr. Geo http://drgeo.eu http://google.com/+DrgeoEu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160102/f46d17ef/attachment.htm From sven at stfx.eu Sat Jan 2 09:13:28 2016 From: sven at stfx.eu (Sven Van Caekenberghe) Date: Sat Jan 2 09:13:30 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: References: <56853583.6040800@drgeo.eu> <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> Message-ID: <2B94FDBE-BAD5-428D-A037-B6ECC1BB1BD5@stfx.eu> > On 02 Jan 2016, at 09:45, Johan Brichau wrote: > > >> On 02 Jan 2016, at 01:22, Sven Van Caekenberghe wrote: >> >> Johan's suggestion is an easy solution, but basically comes down to 'do not use anything outside Latin1 unless you HTML encode it', which is (1) quite limited (2) should not be your job. > > No, that?s not what I am suggesting and there is absolutely no issue when adding widestrings to the Seaside document when generating. That is what I wanted to make sure when I asked Hilaire to report the issue on github. > As Hilaire points out himself (and your code snippets also demonstrate that), the issue is the string concatenation before you put the string on the Seaside document: > > Time millisecondsToRun: [WAHtmlCanvas builder render: [ :html | > 1500 timesRepeat: [ html text: 'hello', '?' ] ]]. > > -> +- 3000ms on my machine > > Time millisecondsToRun: [WAHtmlCanvas builder render: [ :html | > 1500 timesRepeat: [ html text: 'hello'; text: '?' ] ]]. > > -> +- 5ms on my machine > > My suggestion is not to use the #printOn: method for rendering the object on a Seaside canvas, but rather implement a #renderOn: method that avoids concatenating the String and WideString instances. That is an excellent summary of the issue, and a good solution indeed. 
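Spelled out, the suggested #renderOn: could look roughly like this. Only a sketch, reusing the amount / asScaledDecimal: / greaseString code from earlier in the thread and again building the euro sign from its code point:

renderOn: html
	html
		text: (amount asScaledDecimal: 2) greaseString;
		text: ' ';
		text: (String with: (Character value: 16r20AC))

The component side then shows the object with html render: myObject instead of embedding its printString, so the ByteString and the WideString parts never get concatenated before they reach the canvas.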
> cheers > Johan > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From jtuchel at objektfabrik.de Sat Jan 2 10:04:08 2016 From: jtuchel at objektfabrik.de (jtuchel@objektfabrik.de) Date: Sat Jan 2 10:04:11 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <56879497.8040008@drgeo.eu> References: <56853583.6040800@drgeo.eu> <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> <56879497.8040008@drgeo.eu> Message-ID: <5687A098.80605@objektfabrik.de> Hilaire, Am 02.01.16 um 10:12 schrieb Hilaire: > Le 02/01/2016 09:45, Johan Brichau a ?crit : >> My suggestion is not to use the #printOn: method for rendering the >> object on a Seaside canvas, but rather implement a #renderOn: method >> that avoids concatenating the String and WideString instances. >> > Thanks for the tip it helps. > However use of renderOn: will have some large implication on my > Seaside components. Each use case of this object in component need to > be rewritten to use only render:. I don't like much this idea, it > makes the written code less elegant and consistent. since the behavior isn't consistent, I don't see why you should bother... And just to make sure we're not sending you in the wrong direction: Johan surely doesn't suggest turning your model objects into WAComponents. You don't have to use #render:, and maybe you shouldn't even call the method anything like #render, just use some method that accepts a renderer as parameter and renders html onto it. OTOH, speaking of the inspector: not sure about Pharo, but in VA there is #printOn: and #debugPrintOn:, so you could also choose to implement special behavior for inspecting/debugging purposes. This would probably not really solve the concatenation problem, so maybe not even a relevant note ;-) On a side note: I have decided to not use printOn: for application purposes if ever possible for similar reasons since I use Seaside. Too often I had to implement different representation logic for the application and myself as the developer. Having to go through a horribly long list of senders of #printString, #printOn: and sometimes even #asString in order to decide whether it's one or the other several times was an exercise that asks for avoidance... Joachim -- ----------------------------------------------------------------------- Objektfabrik Joachim Tuchel mailto:jtuchel@objektfabrik.de Fliederweg 1 http://www.objektfabrik.de D-71640 Ludwigsburg http://joachimtuchel.wordpress.com Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160102/24317008/attachment-0001.htm From hilaire at drgeo.eu Sat Jan 2 11:02:24 2016 From: hilaire at drgeo.eu (Hilaire) Date: Sat Jan 2 11:02:43 2016 Subject: [Seaside] Re: TableReport slow down In-Reply-To: <5687A098.80605@objektfabrik.de> References: <56853583.6040800@drgeo.eu> <52D43B45-09E2-4B18-BBA2-B097B12976FD@inceptive.be> <30E230D3-A7BD-4BAE-80A3-A501E3C40A4E@stfx.eu> <56879497.8040008@drgeo.eu> <5687A098.80605@objektfabrik.de> Message-ID: Le 02/01/2016 11:04, jtuchel@objektfabrik.de a ?crit : > And just to make sure we're not sending you in the wrong direction: > Johan surely doesn't suggest turning your model objects into > WAComponents. 
You don't have to use #render:, and maybe you shouldn't > even call the method anything like #render, just use some method that > accepts a renderer as parameter and renders html onto it. Hi Joachim If I understood correctly, 1. I need to implement my own*renderOn:* method for my object, (with code like: *html text: amount; text: ' '; text: '?' *) 2. then from any component I render this object with *html render: myObject.* This is what I tested and it provided normal expected rendering time. But it seems you mean something a bit different: no need to use #render: Am I missing something? Thanks Hilaire -- Dr. Geo http://drgeo.eu http://google.com/+DrgeoEu From estebanlm at gmail.com Tue Jan 12 15:44:06 2016 From: estebanlm at gmail.com (Esteban Lorenzano) Date: Tue Jan 12 15:44:09 2016 Subject: [Seaside] OpenID/OAuth implementation? Message-ID: <70C74295-4834-49E5-8CE6-0B9CACA9D4C8@gmail.com> Hi, With the future shutdown of mozilla persona [1], I started to dig onto possibilities? and honestly I do not want to code yet-another-authentication-component? So I wonder? does someone implemented something like OpenID or OAuth and is available? thanks, Esteban [1] https://mail.mozilla.org/pipermail/persona-notices/2016/000005.html From jvdsandt at gmail.com Tue Jan 12 16:59:54 2016 From: jvdsandt at gmail.com (Jan van de Sandt) Date: Tue Jan 12 16:59:56 2016 Subject: [Seaside] OpenID/OAuth implementation? In-Reply-To: <70C74295-4834-49E5-8CE6-0B9CACA9D4C8@gmail.com> References: <70C74295-4834-49E5-8CE6-0B9CACA9D4C8@gmail.com> Message-ID: Hello Esteban, Yes, there is Zinc-SSO [1]. This library supports OAuth 1&2, OpenID and OpenID Connect. OpenID Connect is currently the most popular single sign on solution on the web. You can use the library with just the Zinc web server or in combination with Seaside. [1] https://github.com/svenvc/docs/blob/master/zinc/zinc-sso-paper.md On Tue, Jan 12, 2016 at 4:44 PM, Esteban Lorenzano wrote: > Hi, > > With the future shutdown of mozilla persona [1], I started to dig onto > possibilities? and honestly I do not want to code > yet-another-authentication-component? > So I wonder? does someone implemented something like OpenID or OAuth and > is available? > > thanks, > Esteban > > > [1] https://mail.mozilla.org/pipermail/persona-notices/2016/000005.html > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160112/d11832a4/attachment.htm From cj-bachinger at gmx.de Thu Jan 14 10:42:26 2016 From: cj-bachinger at gmx.de (Christoph J. Bachinger) Date: Thu Jan 14 10:42:37 2016 Subject: [Seaside] Blog component or framework available Message-ID: <56977B92.6090203@gmx.de> Hello, I am looking for a Blog component or framework in Seaside. Any hints? best regards cjb From kuszinger at giscom.hu Thu Jan 14 14:34:30 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Thu Jan 14 14:34:42 2016 Subject: [Seaside] Upload large file - my results and questions Message-ID: Hello! I need to work out a big file uploader. Files could be around ~300Mb. 
*Toolchain:* - OS: Linux kuszidell 4.3.3-2-ARCH #1 SMP PREEMPT Wed Dec 23 20:09:18 CET 2015 x86_64 GNU/Linux running Pharo - Pharo4.0, Latest update: #40626 - Loaded config: KomHttpServer (HernanMoralesDurand.25) - Loaded config: Seaside3 (topa:278) - Seaside Adaptor: WAComancheAdaptor on 8080 *Results:* - When working locally, even 153 MB file is uploaded in seconds. - Pharo grows in memory with each upload (haven't seen any degradation yet). This is also a situation when uloading locally (localhost:8080) - Received file (ByteArray) is held in memory when receiving (checked with halt/inspect). However, when running on the *final configuration*: - IIS7.5 on Windows Server 2008R2 as reverse proxy with ARR and Rewrite IIS "plugins" - VPN connection from IIS box to the Linux box with Pharo, see above The following effects realized: - Upload process goes well, no error. However, sometimes the resulting file wouldn't appear in the target directory (??). No error in Pharo as well. Disappears (hmm?). *Questions:* - are there* any timeout settings* in Seaside and/or in Comanche? Especially on transfers, etc. Session timeout could be short, if any... The main difference between the local and final config/setup is the network/transfer speed (VPN is very slow). - Is there an *option for streaming directly the uploaded file* into a target file? *Uploading "software" is as easy as this:* *renderContentOn:* html html form multipart; with: [ html fileUpload callback: [ :value | self receiveFile: value ]. html submitButton: 'Upload' ]. *receiveFile:* aFile | stream | stream := (FileDirectory default directoryNamed: '/home/kuszi/tmp') assureExistence; forceNewFileNamed: aFile fileName. [ stream binary; nextPutAll: aFile rawContents ] ensure: [ stream close ] Thanks and best regards Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160114/f8919631/attachment.htm From bsselfridge at gmail.com Thu Jan 14 14:54:29 2016 From: bsselfridge at gmail.com (Brad Selfridge) Date: Thu Jan 14 15:15:58 2016 Subject: [Seaside] Component vs Application Confusion Message-ID: <1452783269930-4871327.post@n4.nabble.com> I'm trying to work my way through building a Seaside web app, (for learning purposes), and I'm confused. I have several questions that don't seem to be explained well in any documentation (that I can find). I see in the documentation and code examples where you register each component that basically represents and new URI on the address bar and that component is becomes an application. So: 1. If I have components that I "call" or "show", do I have to or need to register those components? 2. If I add libraries to a "application component" and then call another component (assuming the called component is NOT an application"), the called component does not seem to pick up those libraries from the calling application component. Is this proper behavior? If not, then how does one get the called component to pickup those libraries? ----- Brad Selfridge -- View this message in context: http://forum.world.st/Component-vs-Application-Confusion-tp4871327.html Sent from the Seaside General mailing list archive at Nabble.com. 
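In code, the registration being asked about is typically just the following; a minimal sketch in which MyRootComponent, MyFileLibrary and the path 'myapp' are placeholder names:

| app |
app := WAAdmin register: MyRootComponent asApplicationAt: 'myapp'.
app addLibrary: MyFileLibrary

Only the entry point is registered; components reached later via call:/answer: or show: need no registration of their own, and whatever the attached file library provides is served on every page of that application.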
From jtuchel at objektfabrik.de Thu Jan 14 19:27:34 2016 From: jtuchel at objektfabrik.de (jtuchel@objektfabrik.de) Date: Thu Jan 14 19:27:38 2016 Subject: [Seaside] Component vs Application Confusion In-Reply-To: <1452783269930-4871327.post@n4.nabble.com> References: <1452783269930-4871327.post@n4.nabble.com> Message-ID: <5697F6A6.4030906@objektfabrik.de> Brad, you only register a WAApplication for each "entry point" or url that will be accessed. In that Application you declare the initial component to be rendered whenever somebody visits this url. Everything else is simple call:/answer: or show: between components and/or tasks. In many or maybe even most cases you really just register one Application and have quiet a few components. Components that get called don't need to be registered anywhere. If you want to provide a set of "files", like CSS, or JS, you also register one or more WAFileLibraries with your Application. Whatever this FileLibrary provieds will be added to each and every page in your Application. Does that help? What documentation have you looked at so far? Joachim Am 14.01.16 um 15:54 schrieb Brad Selfridge: > I'm trying to work my way through building a Seaside web app, (for learning > purposes), and I'm confused. > > I have several questions that don't seem to be explained well in any > documentation (that I can find). > > I see in the documentation and code examples where you register each > component that basically represents and new URI on the address bar and that > component is becomes an application. So: > > 1. If I have components that I "call" or "show", do I have to or need to > register those components? > > 2. If I add libraries to a "application component" and then call another > component (assuming the called component is NOT an application"), the called > component does not seem to pick up those libraries from the calling > application component. Is this proper behavior? If not, then how does one > get the called component to pickup those libraries? > > > > > > ----- > Brad Selfridge > -- > View this message in context: http://forum.world.st/Component-vs-Application-Confusion-tp4871327.html > Sent from the Seaside General mailing list archive at Nabble.com. > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > -- ----------------------------------------------------------------------- Objektfabrik Joachim Tuchel mailto:jtuchel@objektfabrik.de Fliederweg 1 http://www.objektfabrik.de D-71640 Ludwigsburg http://joachimtuchel.wordpress.com Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1 From stephan at stack.nl Thu Jan 14 19:45:08 2016 From: stephan at stack.nl (Stephan Eggermont) Date: Thu Jan 14 19:45:17 2016 Subject: [Seaside] Re: Upload large file - my results and questions In-Reply-To: References: Message-ID: On 14-01-16 15:34, Robert Kuszinger wrote: > * Is there an *option for streaming directly the uploaded file* into a > target file? Zinc supports streaming downloads. For larger downloads you do not want to use the default seaside fileupload, as other request will not be handled while the upload is progressing. It would be better to let the IIS handle this. I've only done that with nginx. There used to be some blog posts by nick ager about that. You might want to mail him about that. 
Stephan From stephan at stack.nl Thu Jan 14 19:46:18 2016 From: stephan at stack.nl (Stephan Eggermont) Date: Thu Jan 14 19:50:05 2016 Subject: [Seaside] Re: Blog component or framework available In-Reply-To: <56977B92.6090203@gmx.de> References: <56977B92.6090203@gmx.de> Message-ID: On 14-01-16 11:42, Christoph J. Bachinger wrote: > Hello, > > I am looking for a Blog component or framework in Seaside. > Any hints? Pier3 is a bit large but has a blog. Stephan From bsselfridge at gmail.com Fri Jan 15 14:21:33 2016 From: bsselfridge at gmail.com (Brad Selfridge) Date: Fri Jan 15 14:43:09 2016 Subject: [Seaside] Re: Component vs Application Confusion In-Reply-To: <5697F6A6.4030906@objektfabrik.de> References: <1452783269930-4871327.post@n4.nabble.com> <5697F6A6.4030906@objektfabrik.de> Message-ID: <1452867693344-4871526.post@n4.nabble.com> Thank you. That does help explain very well - I think I get it now. I've dug my way through the Dynamic Web Development with Seaside, Seaside Tutorials, and the Hasso Plattner Institut - Seaside Tutorial. They all cover the concept of registering a component as an application, but I never picked up the relationship between an application component and a call/show component. Coming from a PHP experience, where every page is URL centric, I was having a hard time making the transition. ----- Brad Selfridge -- View this message in context: http://forum.world.st/Component-vs-Application-Confusion-tp4871327p4871526.html Sent from the Seaside General mailing list archive at Nabble.com. From Das.Linux at gmx.de Fri Jan 15 20:28:26 2016 From: Das.Linux at gmx.de (Tobias Pape) Date: Fri Jan 15 20:28:29 2016 Subject: [Seaside] [OT] ISP issue Message-ID: Dear all, due to a configuration issue on the mailing list server, a German ISP had refused messages from this list since mid December. These issues should now be resolved. I apologize should there have been missed messages or other inconveniences due to this. Best regards -Tobias From kuszinger at giscom.hu Sun Jan 17 18:03:34 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Sun Jan 17 18:03:45 2016 Subject: [Seaside] missing method (Grease - Seaside?) - invalidUtf8 Message-ID: Hello! I have a form, having a submint button titled 'Bel?p?s' which is a byteString in the source code. From debug I know that it has problem with the '?' WAComancheAdaptor is used, set to UTF-8 Form has two text fields, one text, one password. *The error is here:* Internal Server ErrorMessageNotUnderstood: GRPharoUtf8Codec>>invalidUtf8 ------------------------------ KomHttpServer/7.1.3 (unix) Server at 'localhost' Port 8080 Actually there is really no *invalidUtf8* method... Same in Chrome and Firefox. *Generated source from the browser:* Seaside *[....]*
E-mail:
Jelszó:
*What else to check? I don't understand what the problem is.* Thanks Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160117/06d24db4/attachment.htm From johan at inceptive.be Mon Jan 18 07:31:08 2016 From: johan at inceptive.be (Johan Brichau) Date: Mon Jan 18 07:31:13 2016 Subject: [Seaside] missing method (Grease - Seaside?) - invalidUtf8 In-Reply-To: References: Message-ID: <875AE073-21F9-4A93-83F1-5312411D7D60@inceptive.be> Hi Robert, The missing method is indeed a mistake in the GRPharoUtf8Codec. I added it and published Grease 1.2.6 with the bugfix. However, you will now get the ?Invalid utf8? error instead, which is not solving your problem. The issue is that the decoder is trying to decode an invalid utf8 bytesequence. You need to check where that is coming from. Set the WADebugErrorHandler as the exception handler for your application to debug which one it is. Johan > On 17 Jan 2016, at 19:03, Robert Kuszinger wrote: > > Hello! > > I have a form, having a submint button titled 'Bel?p?s' which is a byteString in the source code. From debug I know that it has problem with the '?' > > WAComancheAdaptor is used, set to UTF-8 > Form has two text fields, one text, one password. > > The error is here: > > Internal Server Error > > MessageNotUnderstood: GRPharoUtf8Codec>>invalidUtf8 > KomHttpServer/7.1.3 (unix) Server at 'localhost' Port 8080 > > Actually there is really no invalidUtf8 method... > Same in Chrome and Firefox. > > Generated source from the browser: > Seaside > > [....] > >
> > > >
E-mail:
Jelszó:
> > > What else to check? I don't understand what the problem is. > > Thanks > Robert > > > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/b71ce632/attachment.htm From kuszinger at giscom.hu Mon Jan 18 07:58:57 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Mon Jan 18 07:59:09 2016 Subject: [Seaside] missing method (Grease - Seaside?) - invalidUtf8 In-Reply-To: <875AE073-21F9-4A93-83F1-5312411D7D60@inceptive.be> References: <875AE073-21F9-4A93-83F1-5312411D7D60@inceptive.be> Message-ID: Johan, thanks for your work and also the recommendation. I'll continue as you'd mentioned. regards Robert Johan Brichau ezt ?rta (id?pont: 2016. jan. 18., H, 8:31): > Hi Robert, > > The missing method is indeed a mistake in the GRPharoUtf8Codec. I added it > and published Grease 1.2.6 with the bugfix. > However, you will now get the ?Invalid utf8? error instead, which is not > solving your problem. > > The issue is that the decoder is trying to decode an invalid utf8 > bytesequence. You need to check where that is coming from. > Set the WADebugErrorHandler as the exception handler for your application > to debug which one it is. > > Johan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/826a1853/attachment-0001.htm From cj-bachinger at gmx.de Mon Jan 18 10:11:26 2016 From: cj-bachinger at gmx.de (Christoph J. Bachinger) Date: Mon Jan 18 10:11:41 2016 Subject: [Seaside] Re: Blog component or framework available In-Reply-To: References: <56977B92.6090203@gmx.de> Message-ID: <569CBA4E.5080207@gmx.de> Sorry for the delay Stephan, I believe Pier3 is to much for me. Is there any good documentation about it. Like the Pharo books. cjb Am 14.01.2016 um 20:46 schrieb Stephan Eggermont: > On 14-01-16 11:42, Christoph J. Bachinger wrote: >> Hello, >> >> I am looking for a Blog component or framework in Seaside. >> Any hints? > > Pier3 is a bit large but has a blog. > > Stephan > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > From stephan at stack.nl Mon Jan 18 11:17:19 2016 From: stephan at stack.nl (Stephan Eggermont) Date: Mon Jan 18 11:17:35 2016 Subject: [Seaside] Re: Blog component or framework available In-Reply-To: <569CBA4E.5080207@gmx.de> References: <56977B92.6090203@gmx.de> <569CBA4E.5080207@gmx.de> Message-ID: On 18-01-16 11:11, Christoph J. Bachinger wrote: > Sorry for the delay Stephan, > > I believe Pier3 is to much for me. Is there any good documentation about > it. Like the Pharo books. http://www.lukas-renggli.ch/blog http://www.piercms.com/ http://scg.unibe.ch/archive/reports/Reng07c.pdf http://scg.unibe.ch/archive/masters/Reng06a.pdf Stephan From kuszinger at giscom.hu Mon Jan 18 11:29:43 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Mon Jan 18 11:29:45 2016 Subject: [Seaside] Fwd: WADebugErrorHandler problem In-Reply-To: References: Message-ID: Hi Everyone, I have a "fresh" Pharo 4 image loaded with Seaside, Komhttpserver and my own app, naturally. I'm in a search of an error so I need WADebugErrorHandler. 
I've been reading on many forums and also advised to set the following in my application configuration: *----- Configure WAExceptionFilter here:* *Filters* *Possible filters:* AddWAExceptionFilter [*Configure*] *[Remove]* *----- And then set WADebugErrorHandler here:* *Application: /upload* *Inherited Configuration* *Possible parents:* *[ Add ]* WA...whatever..... :) *Assigned parents:* WAExceptionFilterConfiguration [Configure] *General* *Exception Handler* WADebugErrorHandler *[Clear]* *[OK] [APPLY] [Cancel]* *But it is not working*, I'm getting the short error message. No debug window in the image as well.... *Internal Server Error *GRInvalidUtf8Error: Invalid UTF-8 input KomHttpServer/7.1.3 (Win32) Server at 'localhost' Port 8080 *What else to check?* Thanks Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/4b91c4be/attachment.htm From sven at stfx.eu Mon Jan 18 11:45:52 2016 From: sven at stfx.eu (Sven Van Caekenberghe) Date: Mon Jan 18 11:45:54 2016 Subject: [Seaside] Fwd: WADebugErrorHandler problem In-Reply-To: References: Message-ID: <0B5B59E5-A8C6-4D95-AB8D-D61768ADEBE3@stfx.eu> Robert, If you are using Pharo, it seems more logical that you would use ZnZincServerAdaptor. But of course, it might not necessarily have to do with the adaptor. HTH, Sven > On 18 Jan 2016, at 12:29, Robert Kuszinger wrote: > > > > Hi Everyone, > > > > I have a "fresh" Pharo 4 image loaded with Seaside, Komhttpserver and my own app, naturally. I'm in a search of an error so I need WADebugErrorHandler. I've been reading on many forums and also advised to set the following in my application configuration: > > > > ----- Configure WAExceptionFilter here: > > Filters > > Possible filters: > > AddWAExceptionFilter [Configure] [Remove] > > > > > ----- And then set WADebugErrorHandler here: > > Application: /upload > > Inherited Configuration > > Possible parents: [ Add ] > > WA...whatever..... :) > > Assigned parents: > > WAExceptionFilterConfiguration > > [Configure] > > General > > Exception Handler WADebugErrorHandler [Clear] > > [OK] [APPLY] [Cancel] > > > > > But it is not working, I'm getting the short error message. No debug window in the image as well.... > > Internal Server Error > GRInvalidUtf8Error: Invalid UTF-8 input > KomHttpServer/7.1.3 (Win32) Server at 'localhost' Port 8080 > > > > What else to check? > > > > Thanks > > Robert > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From kuszinger at giscom.hu Mon Jan 18 12:17:05 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Mon Jan 18 12:17:06 2016 Subject: [Seaside] ZINC - Kom dilemma Fwd: WADebugErrorHandler problem Message-ID: Sven, thanks for the tip. Actually the UTF problem won't appear with Zinc but I also need to do large file uploads. Zinc resets connection somewhere between 10-19 MB File size. Is there a built-in limitation? Anyway, an idea on how to break through this limit may also help. I need to upload 150-300 MB files regulary with this service. KomHttp did it out-of-the-box. thanks Robert 2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe : > Robert, > > If you are using Pharo, it seems more logical that you would use > ZnZincServerAdaptor. > > But of course, it might not necessarily have to do with the adaptor. 
> > HTH, > > Sven > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/0dc9524b/attachment.htm From sven at stfx.eu Mon Jan 18 12:27:30 2016 From: sven at stfx.eu (Sven Van Caekenberghe) Date: Mon Jan 18 12:27:33 2016 Subject: [Seaside] ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: References: Message-ID: > On 18 Jan 2016, at 13:17, Robert Kuszinger wrote: > > Sven, > > thanks for the tip. Actually the UTF problem won't appear with Zinc but I also need to do large file uploads. Zinc resets connection somewhere between 10-19 MB File size. Is there a built-in limitation? Yes, one of several used by Zn to protect itself from resource abuse (DOS). The default limit is 16Mb. > Anyway, an idea on how to break through this limit may also help. I need to upload 150-300 MB files regulary with this service. KomHttp did it out-of-the-box. The limit you are running into is called #maximumEntitySize and can be set with ZnServer>>#maximumEntitySize: - The default limit is 16Mb. Given an adaptor instance, you can access the server using #server. So together this would be ZnZincServerAdaptor default server maximumEntitySize: 300*1024*1024. Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory. Let me know how that goes. > thanks > Robert > > > 2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe : > Robert, > > If you are using Pharo, it seems more logical that you would use ZnZincServerAdaptor. > > But of course, it might not necessarily have to do with the adaptor. > > HTH, > > Sven > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From kuszinger at giscom.hu Mon Jan 18 12:39:29 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Mon Jan 18 12:39:32 2016 Subject: [Seaside] ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: References: Message-ID: Sven, thanks for the comments. I understand all. Could you please clarify this: "*Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory*." Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation? Answering on how it goes: 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "*Space is low*" warning window appeared. I've clicked on "Proceed" just for curiosity but no reaction in the Pharo gui... hmmm... thanks Robert 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : > > > On 18 Jan 2016, at 13:17, Robert Kuszinger wrote: > > > > Sven, > > > > thanks for the tip. Actually the UTF problem won't appear with Zinc but > I also need to do large file uploads. Zinc resets connection somewhere > between 10-19 MB File size. Is there a built-in limitation? > > Yes, one of several used by Zn to protect itself from resource abuse > (DOS). The default limit is 16Mb. > > > Anyway, an idea on how to break through this limit may also help. 
I need > to upload 150-300 MB files regulary with this service. KomHttp did it > out-of-the-box. > > The limit you are running into is called #maximumEntitySize and can be set > with ZnServer>>#maximumEntitySize: - The default limit is 16Mb. > > Given an adaptor instance, you can access the server using #server. > > So together this would be > > ZnZincServerAdaptor default server maximumEntitySize: 300*1024*1024. > > Technically, it would be possible to write a Zn handler that can accept a > large upload in a streaming fashion (and save it to a file for example), > but I don't think that will happen with Seaside - so you will pull that all > in memory. > > Let me know how that goes. > > > thanks > > Robert > > > > > > 2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe : > > Robert, > > > > If you are using Pharo, it seems more logical that you would use > ZnZincServerAdaptor. > > > > But of course, it might not necessarily have to do with the adaptor. > > > > HTH, > > > > Sven > > > > _______________________________________________ > > seaside mailing list > > seaside@lists.squeakfoundation.org > > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/f5766cbf/attachment.htm From sven at stfx.eu Mon Jan 18 15:30:47 2016 From: sven at stfx.eu (Sven Van Caekenberghe) Date: Mon Jan 18 15:30:45 2016 Subject: [Seaside] ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: References: Message-ID: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> Robert, This is not such an easy problem, you have to really understand HTTP. BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ? Now, here is the key idea (pure Zn, no Seaside, quick hack): (ZnServer startOn: 1701) reader: [ :stream | ZnRequest readStreamingFrom: stream ]; maximumEntitySize: 100*1024*1024; onRequestRespond: [ :req | '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | out binary. ZnUtils streamFrom: req entity stream to: out ]. ZnResponse ok: (ZnEntity text: 'done') ]; yourself. You would use it like this: $ echo one two three > data.bin $ curl -X POST -d @data.bin http://localhost:1701 $ cat /tmp/upload.bin one two three With a 1Mb data file generated from Pharo: '/tmp/data.txt' asFileReference writeStreamDo: [ :out | 1 * 1024 timesRepeat: [ 1 to: 32 do: [ :each | out << Character alphabet << (each printStringPadded: 5); lf ] ] ] $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 * Rebuilt URL to: http://localhost:1701/ * Trying ::1... * connect to ::1 port 1701 failed: Connection refused * Trying 127.0.0.1... 
* Connected to localhost (127.0.0.1) port 1701 (#0) > POST / HTTP/1.1 > Host: localhost:1701 > User-Agent: curl/7.43.0 > Accept: */* > Content-Length: 1048576 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > * Done waiting for 100-continue * We are completely uploaded and fine < HTTP/1.1 200 OK < Content-Type: text/plain;charset=utf-8 < Content-Length: 4 < Date: Mon, 18 Jan 2016 14:56:53 GMT < Server: Zinc HTTP Components 1.0 < * Connection #0 to host localhost left intact done $ diff data2.bin /tmp/upload.bin This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly. Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside. I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue. That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ? Maybe all that is needed is giving Pharo more memory. What platform are you on ? Sven > On 18 Jan 2016, at 13:39, Robert Kuszinger wrote: > > Sven, > > thanks for the comments. I understand all. > > Could you please clarify this: > > "Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory." > > Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation? > > > Answering on how it goes: > > 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" warning window appeared. I've clicked on "Proceed" just for curiosity but no reaction in the Pharo gui... hmmm... > > > thanks > Robert > > > > > 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : > > > On 18 Jan 2016, at 13:17, Robert Kuszinger wrote: > > > > Sven, > > > > thanks for the tip. Actually the UTF problem won't appear with Zinc but I also need to do large file uploads. Zinc resets connection somewhere between 10-19 MB File size. Is there a built-in limitation? > > Yes, one of several used by Zn to protect itself from resource abuse (DOS). The default limit is 16Mb. > > > Anyway, an idea on how to break through this limit may also help. I need to upload 150-300 MB files regulary with this service. KomHttp did it out-of-the-box. > > The limit you are running into is called #maximumEntitySize and can be set with ZnServer>>#maximumEntitySize: - The default limit is 16Mb. > > Given an adaptor instance, you can access the server using #server. > > So together this would be > > ZnZincServerAdaptor default server maximumEntitySize: 300*1024*1024. > > Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory. > > Let me know how that goes. 
> > > thanks > > Robert > > > > > > 2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe : > > Robert, > > > > If you are using Pharo, it seems more logical that you would use ZnZincServerAdaptor. > > > > But of course, it might not necessarily have to do with the adaptor. > > > > HTH, > > > > Sven > > > > _______________________________________________ > > seaside mailing list > > seaside@lists.squeakfoundation.org > > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From kuszinger at giscom.hu Mon Jan 18 17:00:46 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Mon Jan 18 17:00:58 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> Message-ID: Sven, thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content... *Usage*: An office need to receive large map / digital survey data files from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it. No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads. Login, auth to existing client database is done in Seaside/Pharo in a few hours, works nicely. I would be great to create the upload receiving part also with Pharo at least. All this stuff is behind and IIS/ARR - tested for large uploads, worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5. I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.) This is the scenario. Robert Sven Van Caekenberghe ezt ?rta (id?pont: 2016. jan. 18., H, 16:30): > Robert, > > This is not such an easy problem, you have to really understand HTTP. > > BTW, such huge uploads don't seem a very good idea anyway, you will get > annoying timeouts as well. I am curious, what is in those files ? > > Now, here is the key idea (pure Zn, no Seaside, quick hack): > > (ZnServer startOn: 1701) > reader: [ :stream | ZnRequest readStreamingFrom: stream ]; > maximumEntitySize: 100*1024*1024; > onRequestRespond: [ :req | > '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | > out binary. > ZnUtils streamFrom: req entity stream to: out ]. > ZnResponse ok: (ZnEntity text: 'done') ]; > yourself. 
> > You would use it like this: > > $ echo one two three > data.bin > $ curl -X POST -d @data.bin http://localhost:1701 > $ cat /tmp/upload.bin > one two three > > With a 1Mb data file generated from Pharo: > > '/tmp/data.txt' asFileReference writeStreamDo: [ :out | > 1 * 1024 timesRepeat: [ > 1 to: 32 do: [ :each | > out << Character alphabet << (each printStringPadded: 5); lf ] ] ] > > $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 > * Rebuilt URL to: http://localhost:1701/ > * Trying ::1... > * connect to ::1 port 1701 failed: Connection refused > * Trying 127.0.0.1... > * Connected to localhost (127.0.0.1) port 1701 (#0) > > POST / HTTP/1.1 > > Host: localhost:1701 > > User-Agent: curl/7.43.0 > > Accept: */* > > Content-Length: 1048576 > > Content-Type: application/x-www-form-urlencoded > > Expect: 100-continue > > > * Done waiting for 100-continue > * We are completely uploaded and fine > < HTTP/1.1 200 OK > < Content-Type: text/plain;charset=utf-8 > < Content-Length: 4 > < Date: Mon, 18 Jan 2016 14:56:53 GMT > < Server: Zinc HTTP Components 1.0 > < > * Connection #0 to host localhost left intact > done > > $ diff data2.bin /tmp/upload.bin > > This code is totally incomplete, you need lots of error handling. > Furthermore, working with streaming requests is dangerous, because you are > responsible for reading the bodies correctly. > > Also, if you want an upload in a form, you will have to parse that form > (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you > will then again take it in memory. These things are normally done for you > by Zn and/or Seaside. > > I also tried with 100Mb, it worked, but it took several minutes, like 10 > to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too > small for this use case. Maybe curl doesn't upload very aggressively. > Performance is another issue. > > That is why I asked what is in the files, what you eventually want to do > with it. Is the next processing step in Pharo too ? > > Maybe all that is needed is giving Pharo more memory. What platform are > you on ? > > Sven > > > On 18 Jan 2016, at 13:39, Robert Kuszinger wrote: > > > > Sven, > > > > thanks for the comments. I understand all. > > > > Could you please clarify this: > > > > "Technically, it would be possible to write a Zn handler that can accept > a large upload in a streaming fashion (and save it to a file for example), > but I don't think that will happen with Seaside - so you will pull that all > in memory." > > > > Is there a chance to create a streaming solution? Is it documented > somewhere? Why do you think it won't happen with Seaside? Is there a > Seaside set or design limitation? > > > > > > Answering on how it goes: > > > > 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB > upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" > warning window appeared. I've clicked on "Proceed" just for curiosity but > no reaction in the Pharo gui... hmmm... > > > > > > thanks > > Robert > > > > > > > > > > 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/d687dd4b/attachment-0001.htm From Das.Linux at gmx.de Mon Jan 18 17:56:51 2016 From: Das.Linux at gmx.de (Tobias Pape) Date: Mon Jan 18 17:56:55 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... 
In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> Message-ID: Hey all just my 2ct while skimming the thread. I have upload problems with my seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, wich ships an "upload module" https://www.nginx.com/resources/wiki/modules/upload/ Given that, the upload is handled by the reverse proxy and only when the file is already on the file system, the backend (seaside in this case) would get a notification request. I plan to implement this within the next 6 weeks, so if I get going something usable, I'll probably hand it back to the seaside community :) Remind me if I forget ;) best regards -Tobias On 18.01.2016, at 18:00, Robert Kuszinger wrote: > > Sven, > > thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content... > > Usage: An office need to receive large map / digital survey data files from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it. > > No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads. > > Login, auth to existing client database is done in Seaside/Pharo in a few hours, works nicely. > > I would be great to create the upload receiving part also with Pharo at least. > > All this stuff is behind and IIS/ARR - tested for large uploads, worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5. > > I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.) > > This is the scenario. > > Robert > > > > Sven Van Caekenberghe ezt ?rta (id?pont: 2016. jan. 18., H, 16:30): > Robert, > > This is not such an easy problem, you have to really understand HTTP. > > BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ? > > Now, here is the key idea (pure Zn, no Seaside, quick hack): > > (ZnServer startOn: 1701) > reader: [ :stream | ZnRequest readStreamingFrom: stream ]; > maximumEntitySize: 100*1024*1024; > onRequestRespond: [ :req | > '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | > out binary. > ZnUtils streamFrom: req entity stream to: out ]. > ZnResponse ok: (ZnEntity text: 'done') ]; > yourself. > > You would use it like this: > > $ echo one two three > data.bin > $ curl -X POST -d @data.bin http://localhost:1701 > $ cat /tmp/upload.bin > one two three > > With a 1Mb data file generated from Pharo: > > '/tmp/data.txt' asFileReference writeStreamDo: [ :out | > 1 * 1024 timesRepeat: [ > 1 to: 32 do: [ :each | > out << Character alphabet << (each printStringPadded: 5); lf ] ] ] > > $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 > * Rebuilt URL to: http://localhost:1701/ > * Trying ::1... 
> * connect to ::1 port 1701 failed: Connection refused > * Trying 127.0.0.1... > * Connected to localhost (127.0.0.1) port 1701 (#0) > > POST / HTTP/1.1 > > Host: localhost:1701 > > User-Agent: curl/7.43.0 > > Accept: */* > > Content-Length: 1048576 > > Content-Type: application/x-www-form-urlencoded > > Expect: 100-continue > > > * Done waiting for 100-continue > * We are completely uploaded and fine > < HTTP/1.1 200 OK > < Content-Type: text/plain;charset=utf-8 > < Content-Length: 4 > < Date: Mon, 18 Jan 2016 14:56:53 GMT > < Server: Zinc HTTP Components 1.0 > < > * Connection #0 to host localhost left intact > done > > $ diff data2.bin /tmp/upload.bin > > This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly. > > Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside. > > I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue. > > That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ? > > Maybe all that is needed is giving Pharo more memory. What platform are you on ? > > Sven > > > On 18 Jan 2016, at 13:39, Robert Kuszinger wrote: > > > > Sven, > > > > thanks for the comments. I understand all. > > > > Could you please clarify this: > > > > "Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory." > > > > Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation? > > > > > > Answering on how it goes: > > > > 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" warning window appeared. I've clicked on "Proceed" just for curiosity but no reaction in the Pharo gui... hmmm... > > > > > > thanks > > Robert > > > > > > > > > > 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From sven at stfx.eu Mon Jan 18 18:16:01 2016 From: sven at stfx.eu (Sven Van Caekenberghe) Date: Mon Jan 18 18:16:06 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> Message-ID: <0A06E591-1F0E-4662-8AB4-AE9C90D9DC02@stfx.eu> I quickly read through the docs of the nginx module: that looks like a very good solution. Is the plugin good and available for the open source version of nginx ? > On 18 Jan 2016, at 18:56, Tobias Pape wrote: > > Hey all > > just my 2ct while skimming the thread. > > I have upload problems with my seaside app > and plan to tackle them by utilizing the reverse proxy. 
> In my scenario, that is nginx, wich ships an "upload module" > https://www.nginx.com/resources/wiki/modules/upload/ > > Given that, the upload is handled by the reverse proxy and only when > the file is already on the file system, the backend (seaside in this case) > would get a notification request. > > I plan to implement this within the next 6 weeks, so if I get going something > usable, I'll probably hand it back to the seaside community :) > Remind me if I forget ;) > > best regards > -Tobias > > > On 18.01.2016, at 18:00, Robert Kuszinger wrote: > >> >> Sven, >> >> thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content... >> >> Usage: An office need to receive large map / digital survey data files from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it. >> >> No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads. >> >> Login, auth to existing client database is done in Seaside/Pharo in a few hours, works nicely. >> >> I would be great to create the upload receiving part also with Pharo at least. >> >> All this stuff is behind and IIS/ARR - tested for large uploads, worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5. >> >> I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.) >> >> This is the scenario. >> >> Robert >> >> >> >> Sven Van Caekenberghe ezt ?rta (id?pont: 2016. jan. 18., H, 16:30): >> Robert, >> >> This is not such an easy problem, you have to really understand HTTP. >> >> BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ? >> >> Now, here is the key idea (pure Zn, no Seaside, quick hack): >> >> (ZnServer startOn: 1701) >> reader: [ :stream | ZnRequest readStreamingFrom: stream ]; >> maximumEntitySize: 100*1024*1024; >> onRequestRespond: [ :req | >> '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | >> out binary. >> ZnUtils streamFrom: req entity stream to: out ]. >> ZnResponse ok: (ZnEntity text: 'done') ]; >> yourself. >> >> You would use it like this: >> >> $ echo one two three > data.bin >> $ curl -X POST -d @data.bin http://localhost:1701 >> $ cat /tmp/upload.bin >> one two three >> >> With a 1Mb data file generated from Pharo: >> >> '/tmp/data.txt' asFileReference writeStreamDo: [ :out | >> 1 * 1024 timesRepeat: [ >> 1 to: 32 do: [ :each | >> out << Character alphabet << (each printStringPadded: 5); lf ] ] ] >> >> $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 >> * Rebuilt URL to: http://localhost:1701/ >> * Trying ::1... >> * connect to ::1 port 1701 failed: Connection refused >> * Trying 127.0.0.1... 
>> * Connected to localhost (127.0.0.1) port 1701 (#0) >>> POST / HTTP/1.1 >>> Host: localhost:1701 >>> User-Agent: curl/7.43.0 >>> Accept: */* >>> Content-Length: 1048576 >>> Content-Type: application/x-www-form-urlencoded >>> Expect: 100-continue >>> >> * Done waiting for 100-continue >> * We are completely uploaded and fine >> < HTTP/1.1 200 OK >> < Content-Type: text/plain;charset=utf-8 >> < Content-Length: 4 >> < Date: Mon, 18 Jan 2016 14:56:53 GMT >> < Server: Zinc HTTP Components 1.0 >> < >> * Connection #0 to host localhost left intact >> done >> >> $ diff data2.bin /tmp/upload.bin >> >> This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly. >> >> Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside. >> >> I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue. >> >> That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ? >> >> Maybe all that is needed is giving Pharo more memory. What platform are you on ? >> >> Sven >> >>> On 18 Jan 2016, at 13:39, Robert Kuszinger wrote: >>> >>> Sven, >>> >>> thanks for the comments. I understand all. >>> >>> Could you please clarify this: >>> >>> "Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory." >>> >>> Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation? >>> >>> >>> Answering on how it goes: >>> >>> 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" warning window appeared. I've clicked on "Proceed" just for curiosity but no reaction in the Pharo gui... hmmm... >>> >>> >>> thanks >>> Robert >>> >>> >>> >>> >>> 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : >>> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From johan at inceptive.be Mon Jan 18 18:18:02 2016 From: johan at inceptive.be (Johan Brichau) Date: Mon Jan 18 18:18:06 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> Message-ID: <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> Hi Tobias, This is what we do since years :) There was a blog post online describing all the details but I don?t find it anymore. The only think I can find is Nick Ager?s reply when we had a little trouble setting it up [1] I might try to separate off the code to spare you some time. It?s quite simple, actually. 
[1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html Johan > On 18 Jan 2016, at 18:56, Tobias Pape wrote: > > Hey all > > just my 2ct while skimming the thread. > > I have upload problems with my seaside app > and plan to tackle them by utilizing the reverse proxy. > In my scenario, that is nginx, wich ships an "upload module" > https://www.nginx.com/resources/wiki/modules/upload/ > > Given that, the upload is handled by the reverse proxy and only when > the file is already on the file system, the backend (seaside in this case) > would get a notification request. > > I plan to implement this within the next 6 weeks, so if I get going something > usable, I'll probably hand it back to the seaside community :) > Remind me if I forget ;) > > best regards > -Tobias > > > On 18.01.2016, at 18:00, Robert Kuszinger wrote: > >> >> Sven, >> >> thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content... >> >> Usage: An office need to receive large map / digital survey data files from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it. >> >> No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads. >> >> Login, auth to existing client database is done in Seaside/Pharo in a few hours, works nicely. >> >> I would be great to create the upload receiving part also with Pharo at least. >> >> All this stuff is behind and IIS/ARR - tested for large uploads, worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5. >> >> I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.) >> >> This is the scenario. >> >> Robert >> >> >> >> Sven Van Caekenberghe ezt ?rta (id?pont: 2016. jan. 18., H, 16:30): >> Robert, >> >> This is not such an easy problem, you have to really understand HTTP. >> >> BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ? >> >> Now, here is the key idea (pure Zn, no Seaside, quick hack): >> >> (ZnServer startOn: 1701) >> reader: [ :stream | ZnRequest readStreamingFrom: stream ]; >> maximumEntitySize: 100*1024*1024; >> onRequestRespond: [ :req | >> '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | >> out binary. >> ZnUtils streamFrom: req entity stream to: out ]. >> ZnResponse ok: (ZnEntity text: 'done') ]; >> yourself. 
>> >> You would use it like this: >> >> $ echo one two three > data.bin >> $ curl -X POST -d @data.bin http://localhost:1701 >> $ cat /tmp/upload.bin >> one two three >> >> With a 1Mb data file generated from Pharo: >> >> '/tmp/data.txt' asFileReference writeStreamDo: [ :out | >> 1 * 1024 timesRepeat: [ >> 1 to: 32 do: [ :each | >> out << Character alphabet << (each printStringPadded: 5); lf ] ] ] >> >> $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 >> * Rebuilt URL to: http://localhost:1701/ >> * Trying ::1... >> * connect to ::1 port 1701 failed: Connection refused >> * Trying 127.0.0.1... >> * Connected to localhost (127.0.0.1) port 1701 (#0) >>> POST / HTTP/1.1 >>> Host: localhost:1701 >>> User-Agent: curl/7.43.0 >>> Accept: */* >>> Content-Length: 1048576 >>> Content-Type: application/x-www-form-urlencoded >>> Expect: 100-continue >>> >> * Done waiting for 100-continue >> * We are completely uploaded and fine >> < HTTP/1.1 200 OK >> < Content-Type: text/plain;charset=utf-8 >> < Content-Length: 4 >> < Date: Mon, 18 Jan 2016 14:56:53 GMT >> < Server: Zinc HTTP Components 1.0 >> < >> * Connection #0 to host localhost left intact >> done >> >> $ diff data2.bin /tmp/upload.bin >> >> This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly. >> >> Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside. >> >> I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue. >> >> That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ? >> >> Maybe all that is needed is giving Pharo more memory. What platform are you on ? >> >> Sven >> >>> On 18 Jan 2016, at 13:39, Robert Kuszinger wrote: >>> >>> Sven, >>> >>> thanks for the comments. I understand all. >>> >>> Could you please clarify this: >>> >>> "Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory." >>> >>> Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation? >>> >>> >>> Answering on how it goes: >>> >>> 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" warning window appeared. I've clicked on "Proceed" just for curiosity but no reaction in the Pharo gui... hmmm... 
>>> >>> >>> thanks >>> Robert >>> >>> >>> >>> >>> 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : >>> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From Das.Linux at gmx.de Mon Jan 18 18:19:56 2016 From: Das.Linux at gmx.de (Tobias Pape) Date: Mon Jan 18 18:19:59 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> Message-ID: Hi Johan On 18.01.2016, at 19:18, Johan Brichau wrote: > Hi Tobias, > > This is what we do since years :) > There was a blog post online describing all the details but I don?t find it anymore. > The only think I can find is Nick Ager?s reply when we had a little trouble setting it up [1] > > I might try to separate off the code to spare you some time. > It?s quite simple, actually. Oh that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messaged ;) Best regards -Tobias > > [1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html > > Johan > >> On 18 Jan 2016, at 18:56, Tobias Pape wrote: >> >> Hey all >> >> just my 2ct while skimming the thread. >> >> I have upload problems with my seaside app >> and plan to tackle them by utilizing the reverse proxy. >> In my scenario, that is nginx, wich ships an "upload module" >> https://www.nginx.com/resources/wiki/modules/upload/ >> >> Given that, the upload is handled by the reverse proxy and only when >> the file is already on the file system, the backend (seaside in this case) >> would get a notification request. >> >> I plan to implement this within the next 6 weeks, so if I get going something >> usable, I'll probably hand it back to the seaside community :) >> Remind me if I forget ;) >> >> best regards >> -Tobias >> >> >> On 18.01.2016, at 18:00, Robert Kuszinger wrote: >> >>> >>> Sven, >>> >>> thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content... >>> >>> Usage: An office need to receive large map / digital survey data files from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it. >>> >>> No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads. >>> >>> Login, auth to existing client database is done in Seaside/Pharo in a few hours, works nicely. >>> >>> I would be great to create the upload receiving part also with Pharo at least. 
>>> >>> All this stuff is behind and IIS/ARR - tested for large uploads, worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5. >>> >>> I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.) >>> >>> This is the scenario. >>> >>> Robert >>> >>> >>> >>> Sven Van Caekenberghe ezt ?rta (id?pont: 2016. jan. 18., H, 16:30): >>> Robert, >>> >>> This is not such an easy problem, you have to really understand HTTP. >>> >>> BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ? >>> >>> Now, here is the key idea (pure Zn, no Seaside, quick hack): >>> >>> (ZnServer startOn: 1701) >>> reader: [ :stream | ZnRequest readStreamingFrom: stream ]; >>> maximumEntitySize: 100*1024*1024; >>> onRequestRespond: [ :req | >>> '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | >>> out binary. >>> ZnUtils streamFrom: req entity stream to: out ]. >>> ZnResponse ok: (ZnEntity text: 'done') ]; >>> yourself. >>> >>> You would use it like this: >>> >>> $ echo one two three > data.bin >>> $ curl -X POST -d @data.bin http://localhost:1701 >>> $ cat /tmp/upload.bin >>> one two three >>> >>> With a 1Mb data file generated from Pharo: >>> >>> '/tmp/data.txt' asFileReference writeStreamDo: [ :out | >>> 1 * 1024 timesRepeat: [ >>> 1 to: 32 do: [ :each | >>> out << Character alphabet << (each printStringPadded: 5); lf ] ] ] >>> >>> $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 >>> * Rebuilt URL to: http://localhost:1701/ >>> * Trying ::1... >>> * connect to ::1 port 1701 failed: Connection refused >>> * Trying 127.0.0.1... >>> * Connected to localhost (127.0.0.1) port 1701 (#0) >>>> POST / HTTP/1.1 >>>> Host: localhost:1701 >>>> User-Agent: curl/7.43.0 >>>> Accept: */* >>>> Content-Length: 1048576 >>>> Content-Type: application/x-www-form-urlencoded >>>> Expect: 100-continue >>>> >>> * Done waiting for 100-continue >>> * We are completely uploaded and fine >>> < HTTP/1.1 200 OK >>> < Content-Type: text/plain;charset=utf-8 >>> < Content-Length: 4 >>> < Date: Mon, 18 Jan 2016 14:56:53 GMT >>> < Server: Zinc HTTP Components 1.0 >>> < >>> * Connection #0 to host localhost left intact >>> done >>> >>> $ diff data2.bin /tmp/upload.bin >>> >>> This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly. >>> >>> Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside. >>> >>> I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue. >>> >>> That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ? >>> >>> Maybe all that is needed is giving Pharo more memory. What platform are you on ? >>> >>> Sven >>> >>>> On 18 Jan 2016, at 13:39, Robert Kuszinger wrote: >>>> >>>> Sven, >>>> >>>> thanks for the comments. I understand all. 
>>>> >>>> Could you please clarify this: >>>> >>>> "Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory." >>>> >>>> Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation? >>>> >>>> >>>> Answering on how it goes: >>>> >>>> 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" warning window appeared. I've clicked on "Proceed" just for curiosity but no reaction in the Pharo gui... hmmm... >>>> >>>> >>>> thanks >>>> Robert >>>> >>>> >>>> >>>> >>>> 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : >>>> >>> _______________________________________________ >>> seaside mailing list >>> seaside@lists.squeakfoundation.org >>> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside >> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From kuszinger at giscom.hu Mon Jan 18 18:24:37 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Mon Jan 18 18:24:39 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> Message-ID: Hmmm. Putting upload one step out in the frontline.. I ask my office also about nginx and also test install for myself. Anyway, a pure pharo streaming solution is still interesting for me. R 2016.01.18. 19:20 ezt ?rta ("Tobias Pape" ): > Hi Johan > On 18.01.2016, at 19:18, Johan Brichau wrote: > > > Hi Tobias, > > > > This is what we do since years :) > > There was a blog post online describing all the details but I don?t find > it anymore. > > The only think I can find is Nick Ager?s reply when we had a little > trouble setting it up [1] > > > > I might try to separate off the code to spare you some time. > > It?s quite simple, actually. > > Oh that would be just great. My students would rejoice to no longer see > the spurious "503 gateway timed out" messaged ;) > > Best regards > -Tobias > > > > > > [1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html > > > > Johan > > > >> On 18 Jan 2016, at 18:56, Tobias Pape wrote: > >> > >> Hey all > >> > >> just my 2ct while skimming the thread. > >> > >> I have upload problems with my seaside app > >> and plan to tackle them by utilizing the reverse proxy. > >> In my scenario, that is nginx, wich ships an "upload module" > >> https://www.nginx.com/resources/wiki/modules/upload/ > >> > >> Given that, the upload is handled by the reverse proxy and only when > >> the file is already on the file system, the backend (seaside in this > case) > >> would get a notification request. 
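On the image side, handling such a notification request might look roughly like this. This is a hedged sketch only, not the code referred to in this thread: the form field names ('file.path', 'file.name') and the #clientFolderName session accessor are assumptions that have to match whatever the nginx upload module is configured to pass along (typically via its upload_pass / upload_store / upload_set_form_field directives):

    handleUploadNotification
        "Runs after nginx has already written the upload to disk;
         only a small notification request ever reaches the image."
        | request tmpPath originalName targetFolder |
        request := self requestContext request.
        tmpPath := request postFields at: 'file.path' ifAbsent: [ ^ self ].
        originalName := request postFields at: 'file.name' ifAbsent: [ 'upload.bin' ].
        targetFolder := '/srv/uploads' asFileReference / self session clientFolderName.
        targetFolder ensureCreateDirectory.
        tmpPath asFileReference moveTo: targetFolder / originalName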
> >> > >> I plan to implement this within the next 6 weeks, so if I get going > something > >> usable, I'll probably hand it back to the seaside community :) > >> Remind me if I forget ;) > >> > >> best regards > >> -Tobias > >> > >> > >> On 18.01.2016, at 18:00, Robert Kuszinger wrote: > >> > >>> > >>> Sven, > >>> > >>> thanks for the demo. Zn without Seaside is just fine if it could work. > A one-field form with only the uploaded file could work also. Some > javascript addition on client side is acceptable - I'll see then. I > understand that a simple file upload is also "composite" data with filename > and binary content... > >>> > >>> Usage: An office need to receive large map / digital survey data files > from clients. Now they post it on CD or DVD disks, however the typical > amount is 100-200 Mb in one or two or more files (depending one who has > heard about ZIP and who hasn't :) - really! ). So we are trying to create > an upload portal where they could login and then upload files to folders > where folder name contains their ID and date. That's it. > >>> > >>> No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure > browser upload as clients know this from their everyday life. And they > could also add metadata about their uploads. > >>> > >>> Login, auth to existing client database is done in Seaside/Pharo in a > few hours, works nicely. > >>> > >>> I would be great to create the upload receiving part also with Pharo > at least. > >>> > >>> All this stuff is behind and IIS/ARR - tested for large uploads, > worked well when extending timeout limitations is IIS (with Kom but eating > memory, maybe not so much as Zinc now, but it had the codepage problem I > wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS > 7.5. > >>> > >>> I'm developing on Linux and testing on Windows Server 2008 configured > to the same setup (IIS, ARR, etc.) > >>> > >>> This is the scenario. > >>> > >>> Robert > >>> > >>> > >>> > >>> Sven Van Caekenberghe ezt ?rta (id?pont: 2016. jan. > 18., H, 16:30): > >>> Robert, > >>> > >>> This is not such an easy problem, you have to really understand HTTP. > >>> > >>> BTW, such huge uploads don't seem a very good idea anyway, you will > get annoying timeouts as well. I am curious, what is in those files ? > >>> > >>> Now, here is the key idea (pure Zn, no Seaside, quick hack): > >>> > >>> (ZnServer startOn: 1701) > >>> reader: [ :stream | ZnRequest readStreamingFrom: stream ]; > >>> maximumEntitySize: 100*1024*1024; > >>> onRequestRespond: [ :req | > >>> '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | > >>> out binary. > >>> ZnUtils streamFrom: req entity stream to: out ]. > >>> ZnResponse ok: (ZnEntity text: 'done') ]; > >>> yourself. > >>> > >>> You would use it like this: > >>> > >>> $ echo one two three > data.bin > >>> $ curl -X POST -d @data.bin http://localhost:1701 > >>> $ cat /tmp/upload.bin > >>> one two three > >>> > >>> With a 1Mb data file generated from Pharo: > >>> > >>> '/tmp/data.txt' asFileReference writeStreamDo: [ :out | > >>> 1 * 1024 timesRepeat: [ > >>> 1 to: 32 do: [ :each | > >>> out << Character alphabet << (each printStringPadded: 5); lf ] ] ] > >>> > >>> $ curl -v -X POST --data-binary @data2.bin http://localhost:1701 > >>> * Rebuilt URL to: http://localhost:1701/ > >>> * Trying ::1... > >>> * connect to ::1 port 1701 failed: Connection refused > >>> * Trying 127.0.0.1... 
> >>> * Connected to localhost (127.0.0.1) port 1701 (#0) > >>>> POST / HTTP/1.1 > >>>> Host: localhost:1701 > >>>> User-Agent: curl/7.43.0 > >>>> Accept: */* > >>>> Content-Length: 1048576 > >>>> Content-Type: application/x-www-form-urlencoded > >>>> Expect: 100-continue > >>>> > >>> * Done waiting for 100-continue > >>> * We are completely uploaded and fine > >>> < HTTP/1.1 200 OK > >>> < Content-Type: text/plain;charset=utf-8 > >>> < Content-Length: 4 > >>> < Date: Mon, 18 Jan 2016 14:56:53 GMT > >>> < Server: Zinc HTTP Components 1.0 > >>> < > >>> * Connection #0 to host localhost left intact > >>> done > >>> > >>> $ diff data2.bin /tmp/upload.bin > >>> > >>> This code is totally incomplete, you need lots of error handling. > Furthermore, working with streaming requests is dangerous, because you are > responsible for reading the bodies correctly. > >>> > >>> Also, if you want an upload in a form, you will have to parse that > form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), > you will then again take it in memory. These things are normally done for > you by Zn and/or Seaside. > >>> > >>> I also tried with 100Mb, it worked, but it took several minutes, like > 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably > too small for this use case. Maybe curl doesn't upload very aggressively. > Performance is another issue. > >>> > >>> That is why I asked what is in the files, what you eventually want to > do with it. Is the next processing step in Pharo too ? > >>> > >>> Maybe all that is needed is giving Pharo more memory. What platform > are you on ? > >>> > >>> Sven > >>> > >>>> On 18 Jan 2016, at 13:39, Robert Kuszinger > wrote: > >>>> > >>>> Sven, > >>>> > >>>> thanks for the comments. I understand all. > >>>> > >>>> Could you please clarify this: > >>>> > >>>> "Technically, it would be possible to write a Zn handler that can > accept a large upload in a streaming fashion (and save it to a file for > example), but I don't think that will happen with Seaside - so you will > pull that all in memory." > >>>> > >>>> Is there a chance to create a streaming solution? Is it documented > somewhere? Why do you think it won't happen with Seaside? Is there a > Seaside set or design limitation? > >>>> > >>>> > >>>> Answering on how it goes: > >>>> > >>>> 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB > upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is low" > warning window appeared. I've clicked on "Proceed" just for curiosity but > no reaction in the Pharo gui... hmmm... 
> >>>> > >>>> > >>>> thanks > >>>> Robert > >>>> > >>>> > >>>> > >>>> > >>>> 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe : > >>>> > >>> _______________________________________________ > >>> seaside mailing list > >>> seaside@lists.squeakfoundation.org > >>> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > >> > >> _______________________________________________ > >> seaside mailing list > >> seaside@lists.squeakfoundation.org > >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > > _______________________________________________ > > seaside mailing list > > seaside@lists.squeakfoundation.org > > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/495dda87/attachment-0001.htm From johan at inceptive.be Mon Jan 18 18:24:35 2016 From: johan at inceptive.be (Johan Brichau) Date: Mon Jan 18 18:24:41 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> Message-ID: <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> > On 18 Jan 2016, at 19:19, Tobias Pape wrote: > > Oh that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messaged ;) > > Best regards > -Tobias Are you referring to file uploads on SS3? There, the file is stored inside of Gemstone db, right? Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/15d9b768/attachment.htm From Das.Linux at gmx.de Mon Jan 18 18:34:43 2016 From: Das.Linux at gmx.de (Tobias Pape) Date: Mon Jan 18 18:34:47 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> Message-ID: On 18.01.2016, at 19:24, Johan Brichau wrote: > >> On 18 Jan 2016, at 19:19, Tobias Pape wrote: >> >> Oh that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messaged ;) >> >> Best regards >> -Tobias > > Are you referring to file uploads on SS3? It is not the SS3, it is another system :) > There, the file is stored inside of Gemstone db, right? For ss3 on GS, thats right. It actually still works well that way, but we don't have many files over 6MB, and more and more packages are over at github so I don't see any need to implement upload improvements there. On the other system, however, we hat students trying to upload some 300 MB files. In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire to use nginx upload. Best regards -Tobias > > Johan > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From johan at inceptive.be Mon Jan 18 18:43:52 2016 From: johan at inceptive.be (Johan Brichau) Date: Mon Jan 18 18:43:57 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... 
In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> Message-ID: <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> > On 18 Jan 2016, at 19:34, Tobias Pape wrote: > > For ss3 on GS, thats right. It actually still works well that way, but we don't have many files > over 6MB, and more and more packages are over at github so I don't see any need to > implement upload improvements there. True. I was just going to say it would not really help there since you need the file inside the db anyway. > On the other system, however, we hat students trying to upload some 300 MB files. > In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire > to use nginx upload. Ok, I?m currently heads-down in some other things, so let me take a look tomorrow on this. This will work for Robert as well, of course :) Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/7eabbf83/attachment.htm From Das.Linux at gmx.de Mon Jan 18 18:45:58 2016 From: Das.Linux at gmx.de (Tobias Pape) Date: Mon Jan 18 18:46:01 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> Message-ID: On 18.01.2016, at 19:43, Johan Brichau wrote: > >> On 18 Jan 2016, at 19:34, Tobias Pape wrote: >> >> For ss3 on GS, thats right. It actually still works well that way, but we don't have many files >> over 6MB, and more and more packages are over at github so I don't see any need to >> implement upload improvements there. > > True. I was just going to say it would not really help there since you need the file inside the db anyway. > >> On the other system, however, we hat students trying to upload some 300 MB files. >> In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire >> to use nginx upload. > > Ok, I?m currently heads-down in some other things, so let me take a look tomorrow on this. > This will work for Robert as well, of course :) > Take your time :) And, thank you. Best regards -Tobias > Johan > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From johan at inceptive.be Mon Jan 18 18:58:47 2016 From: johan at inceptive.be (Johan Brichau) Date: Mon Jan 18 18:58:51 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> Message-ID: Actually, I was quickly trying to find Nick?s blog post on the wayback machine but it?s not archived :( I did find this: http://www.squeaksource.com/fileupload/ I did not check what?s in there but until I can take a look, here it is already ;) cheers Johan > On 18 Jan 2016, at 19:45, Tobias Pape wrote: > > > On 18.01.2016, at 19:43, Johan Brichau > wrote: > >> >>> On 18 Jan 2016, at 19:34, Tobias Pape wrote: >>> >>> For ss3 on GS, thats right. 
It actually still works well that way, but we don't have many files >>> over 6MB, and more and more packages are over at github so I don't see any need to >>> implement upload improvements there. >> >> True. I was just going to say it would not really help there since you need the file inside the db anyway. >> >>> On the other system, however, we hat students trying to upload some 300 MB files. >>> In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire >>> to use nginx upload. >> >> Ok, I?m currently heads-down in some other things, so let me take a look tomorrow on this. >> This will work for Robert as well, of course :) >> > > Take your time :) > And, thank you. > > Best regards > -Tobias > >> Johan >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160118/8bcf25c0/attachment-0001.htm From Das.Linux at gmx.de Mon Jan 18 19:30:45 2016 From: Das.Linux at gmx.de (Tobias Pape) Date: Mon Jan 18 19:30:49 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> Message-ID: <19053A84-B692-4BF8-A798-2CDD991B6904@gmx.de> On 18.01.2016, at 19:58, Johan Brichau wrote: > Actually, > > I was quickly trying to find Nick?s blog post on the wayback machine but it?s not archived :( > > I did find this: http://www.squeaksource.com/fileupload/ > > I did not check what?s in there but until I can take a look, here it is already ;) > :D > cheers > Johan > >> On 18 Jan 2016, at 19:45, Tobias Pape wrote: >> >> >> On 18.01.2016, at 19:43, Johan Brichau wrote: >> >>> >>>> On 18 Jan 2016, at 19:34, Tobias Pape wrote: >>>> >>>> For ss3 on GS, thats right. It actually still works well that way, but we don't have many files >>>> over 6MB, and more and more packages are over at github so I don't see any need to >>>> implement upload improvements there. >>> >>> True. I was just going to say it would not really help there since you need the file inside the db anyway. >>> >>>> On the other system, however, we hat students trying to upload some 300 MB files. >>>> In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire >>>> to use nginx upload. >>> >>> Ok, I?m currently heads-down in some other things, so let me take a look tomorrow on this. >>> This will work for Robert as well, of course :) >>> >> >> Take your time :) >> And, thank you. 
>> >> Best regards >> -Tobias >> >>> Johan >>> _______________________________________________ >>> seaside mailing list >>> seaside@lists.squeakfoundation.org >>> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside >> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside

From pbpublist at gmail.com Mon Jan 18 20:04:16 2016 From: pbpublist at gmail.com (Phil (list)) Date: Mon Jan 18 20:04:21 2016 Subject: [Seaside] ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: References: Message-ID: <1453147456.3405.85.camel@gmail.com> On Mon, 2016-01-18 at 13:39 +0100, Robert Kuszinger wrote: > > Answering on how it goes: > > 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB > upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is > low" warning window appeared. I've clicked on "Proceed" just for > curiosity but no reaction in the Pharo gui... hmmm... > Assuming you're running the Cog VM, you're likely hitting its memory limits. I don't recall the exact number, but once you get to the 400-500 Meg range you're hitting the absolute limit of how much RAM Cog can deal with. Also, as you've noticed, once you get over about 300 Meg things will start to slow down rather dramatically. From what I've read, Spur roughly doubles the amount of RAM that the VM can work with and should perform much better at large image sizes. You might want to consider handling large file uploads like this outside of the image (i.e. still have your front end in Seaside but handle the actual upload via an external mechanism). > > thanks > Robert >

From stephan at stack.nl Tue Jan 19 01:14:15 2016 From: stephan at stack.nl (Stephan Eggermont) Date: Tue Jan 19 01:14:25 2016 Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: <1453147456.3405.85.camel@gmail.com> References: <1453147456.3405.85.camel@gmail.com> Message-ID: On 18/01/16 21:04, Phil (list) wrote: > On Mon, 2016-01-18 at 13:39 +0100, Robert Kuszinger wrote: >> >> Answering on how it goes: >> >> 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ 120 MB >> upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space is >> low" warning window appeared. I've clicked on "Proceed" just for >> curiosity but no reaction in the Pharo gui... hmmm... >> > > Assuming you're running the Cog VM you're likely hitting its memory > limits. I don't recall the exact number, but once you get to the 400- > 500 Meg range you're hitting the absolute limit of how much RAM Cog can > deal with. That is just the default limit. On a Mac I've worked with about 2 GB. There used to be some limitation on Windows; I think there was an issue in 2011 on Windows where there was a limit closer to 512 MB, but AFAIK that was fixed.
Stephan From pbpublist at gmail.com Tue Jan 19 01:36:18 2016 From: pbpublist at gmail.com (Phil (list)) Date: Tue Jan 19 01:36:22 2016 Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: References: <1453147456.3405.85.camel@gmail.com> Message-ID: <1453167378.3405.131.camel@gmail.com> On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote: > On 18/01/16 21:04, Phil (list) wrote: > > On Mon, 2016-01-18 at 13:39 +0100, Robert Kuszinger wrote: > > > > > > Answering on how it goes: > > > > > > 20 - 40 - 10MB upload in seconds. Now it seems to stuck on a ~ > > > 120 MB > > > upload. Pharo memory (windows OS) seemed to grow ~ 441 MB. "Space > > > is > > > low" warning window appeared. I've clicked on "Proceed" just for > > > curiosity but no reaction in the Pharo gui... hmmm... > > > > > > > Assuming you're running the Cog VM you're likely hitting its memory > > limits.??I don't recall the exact number, but once you get to the > > 400- > > 500 Meg range you're hitting the absolute limit of how much RAM Cog > > can > > deal with. > > That is just default limits. On a mac I've worked with about 2GB. > There? > used to be some limitation on windows, I think there was an issue in? > 2011 on windows where there was a limit closer to 512 GB, but AFAIk > that? > was fixed. > Is that something that can be changed without a custom build? ?If so, I'd love to learn how. ?I was under the impression that this was a hard limit in Cog (that varies a bit by platform, but still well below 1G) > Stephan > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside From pdebruic at gmail.com Tue Jan 19 05:26:37 2016 From: pdebruic at gmail.com (Paul DeBruicker) Date: Tue Jan 19 05:48:38 2016 Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: <1453167378.3405.131.camel@gmail.com> References: <1453147456.3405.85.camel@gmail.com> <1453167378.3405.131.camel@gmail.com> Message-ID: <1453181197848-4872528.post@n4.nabble.com> Phil (list) wrote > On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote: > >> > Assuming you're running the Cog VM you're likely hitting its memory >> > limits.??I don't recall the exact number, but once you get to the >> > 400- >> > 500 Meg range you're hitting the absolute limit of how much RAM Cog >> > can >> > deal with. >> >> That is just default limits. On a mac I've worked with about 2GB. >> There? >> used to be some limitation on windows, I think there was an issue in? >> 2011 on windows where there was a limit closer to 512 GB, but AFAIk >> that? >> was fixed. >> > > Is that something that can be changed without a custom build? ?If so, > I'd love to learn how. ?I was under the impression that this was a hard > limit in Cog (that varies a bit by platform, but still well below 1G) On the mac you can change the limit in /Pharo.app/Contents/Info.plist by adjusting the value for the SqueakMaxHeapSize setting and restarting the image. -- View this message in context: http://forum.world.st/Re-ZINC-Kom-dilemma-Fwd-WADebugErrorHandler-problem-tp4872293p4872528.html Sent from the Seaside General mailing list archive at Nabble.com. 
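Concretely, the entry Paul refers to lives in Pharo.app/Contents/Info.plist and takes a byte count; the figure below (roughly 1 GB) is only an illustrative choice:

    <key>SqueakMaxHeapSize</key>
    <integer>1073741824</integer>

The value is read when the VM starts, hence the restart Paul mentions.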
From kuszinger at giscom.hu Tue Jan 19 07:40:29 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Tue Jan 19 07:40:42 2016 Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: <1453181197848-4872528.post@n4.nabble.com> References: <1453147456.3405.85.camel@gmail.com> <1453167378.3405.131.camel@gmail.com> <1453181197848-4872528.post@n4.nabble.com> Message-ID: Paul, Phil, thanks for the ideas and information. I'll also try this approach, to make sure every factor is optimal. Summarizing all the comments so far, it seems the bulletproof service infrastructure is to stream uploads to disk, possibly outside the Smalltalk VM (the nginx way), and keep other application data inside.
Memory > limits are still interesting for safe handling of a larger parallel load. > > thanks > R > > > > > Paul DeBruicker ezt ?rta (id?pont: 2016. jan. 19., > K, 6:48): > >> Phil (list) wrote >> > On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote: >> > >> >> > Assuming you're running the Cog VM you're likely hitting its memory >> >> > limits. I don't recall the exact number, but once you get to the >> >> > 400- >> >> > 500 Meg range you're hitting the absolute limit of how much RAM Cog >> >> > can >> >> > deal with. >> >> >> >> That is just default limits. On a mac I've worked with about 2GB. >> >> There >> >> used to be some limitation on windows, I think there was an issue in >> >> 2011 on windows where there was a limit closer to 512 GB, but AFAIk >> >> that >> >> was fixed. >> >> >> > >> > Is that something that can be changed without a custom build? If so, >> > I'd love to learn how. I was under the impression that this was a hard >> > limit in Cog (that varies a bit by platform, but still well below 1G) >> >> On the mac you can change the limit in >> >> /Pharo.app/Contents/Info.plist >> >> by adjusting the value for the SqueakMaxHeapSize setting and restarting >> the >> image. >> >> >> >> >> -- >> View this message in context: >> http://forum.world.st/Re-ZINC-Kom-dilemma-Fwd-WADebugErrorHandler-problem-tp4872293p4872528.html >> Sent from the Seaside General mailing list archive at Nabble.com. >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160119/6d574733/attachment-0001.htm From jtuchel at objektfabrik.de Tue Jan 19 14:21:58 2016 From: jtuchel at objektfabrik.de (jtuchel@objektfabrik.de) Date: Tue Jan 19 14:22:01 2016 Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: References: <1453147456.3405.85.camel@gmail.com> <1453167378.3405.131.camel@gmail.com> <1453181197848-4872528.post@n4.nabble.com> Message-ID: <569E4686.4030104@objektfabrik.de> Hi Robert, I was fascinated by the idea of having apache / nginx leave Seaside alone during a file uload until it is finished and tried to find out if something like that is available for Apache. I couldn't find such a thing, but I am sure our Kontolino App could benefit a lot from it, both CPU and memory-wise. So I'd be grateful if you could provide a link to what you found... Joachim Am 19.01.16 um 15:15 schrieb Robert Kuszinger: > Hi Everyone, > > Just for information: it seems that there is *no nginx with upload > module* on Windows. However in Apache doc there is also an upload > providing fairly the same. I'm now testing it on Windows and if it > works I follow with the Smalltalk ending in my Seaside app. > > > regards > Robert > > > > > Robert Kuszinger > > ezt ?rta (id?pont: 2016. jan. 19., K, 8:40): > > > Paul, Phil, > > thanks for the ideas and information. I'll also try this way also > to ensure every factor to be optimal. > Summarizing all the comments by far it seems that the bulletproof > service infrastructure is streaming upload to disk or possibly > outside the Smalltalk VM (nginx way) and keep other application > data inside. Memory limits are still interesting for safe handling > of a larger parallel load. > > thanks > R > > > > > Paul DeBruicker > > ezt ?rta (id?pont: 2016. jan. 
19., K, 6:48): > > Phil (list) wrote > > On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote: > > > >> > Assuming you're running the Cog VM you're likely hitting > its memory > >> > limits. I don't recall the exact number, but once you > get to the > >> > 400- > >> > 500 Meg range you're hitting the absolute limit of how > much RAM Cog > >> > can > >> > deal with. > >> > >> That is just default limits. On a mac I've worked with > about 2GB. > >> There > >> used to be some limitation on windows, I think there was an > issue in > >> 2011 on windows where there was a limit closer to 512 GB, > but AFAIk > >> that > >> was fixed. > >> > > > > Is that something that can be changed without a custom > build? If so, > > I'd love to learn how. I was under the impression that this > was a hard > > limit in Cog (that varies a bit by platform, but still well > below 1G) > > On the mac you can change the limit in > > /Pharo.app/Contents/Info.plist > > by adjusting the value for the SqueakMaxHeapSize setting and > restarting the > image. > > > > > -- > View this message in context: > http://forum.world.st/Re-ZINC-Kom-dilemma-Fwd-WADebugErrorHandler-problem-tp4872293p4872528.html > Sent from the Seaside General mailing list archive at Nabble.com. > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -- ----------------------------------------------------------------------- Objektfabrik Joachim Tuchel mailto:jtuchel@objektfabrik.de Fliederweg 1 http://www.objektfabrik.de D-71640 Ludwigsburg http://joachimtuchel.wordpress.com Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160119/a53a58a6/attachment.htm From jtuchel at objektfabrik.de Tue Jan 19 14:30:47 2016 From: jtuchel at objektfabrik.de (jtuchel@objektfabrik.de) Date: Tue Jan 19 14:30:51 2016 Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem In-Reply-To: <569E4686.4030104@objektfabrik.de> References: <1453147456.3405.85.camel@gmail.com> <1453167378.3405.131.camel@gmail.com> <1453181197848-4872528.post@n4.nabble.com> <569E4686.4030104@objektfabrik.de> Message-ID: <569E4897.20708@objektfabrik.de> Robert, so I took another attempt and found this: http://apache.webthing.com/mod_upload/ Is this what you referred to? Joachim Am 19.01.16 um 15:21 schrieb jtuchel@objektfabrik.de: > Hi Robert, > > I was fascinated by the idea of having apache / nginx leave Seaside > alone during a file uload until it is finished and tried to find out > if something like that is available for Apache. I couldn't find such a > thing, but I am sure our Kontolino App could benefit a lot from it, > both CPU and memory-wise. > > So I'd be grateful if you could provide a link to what you found... > > Joachim > > > > Am 19.01.16 um 15:15 schrieb Robert Kuszinger: >> Hi Everyone, >> >> Just for information: it seems that there is *no nginx with upload >> module* on Windows. However in Apache doc there is also an upload >> providing fairly the same. I'm now testing it on Windows and if it >> works I follow with the Smalltalk ending in my Seaside app. 
>> >> >> regards >> Robert >> >> >> >> >> Robert Kuszinger > >> ezt ?rta (id?pont: 2016. jan. 19., K, 8:40): >> >> >> Paul, Phil, >> >> thanks for the ideas and information. I'll also try this way also >> to ensure every factor to be optimal. >> Summarizing all the comments by far it seems that the bulletproof >> service infrastructure is streaming upload to disk or possibly >> outside the Smalltalk VM (nginx way) and keep other application >> data inside. Memory limits are still interesting for safe >> handling of a larger parallel load. >> >> thanks >> R >> >> >> >> >> Paul DeBruicker > >> ezt ?rta (id?pont: 2016. jan. 19., K, 6:48): >> >> Phil (list) wrote >> > On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote: >> > >> >> > Assuming you're running the Cog VM you're likely hitting >> its memory >> >> > limits. I don't recall the exact number, but once you >> get to the >> >> > 400- >> >> > 500 Meg range you're hitting the absolute limit of how >> much RAM Cog >> >> > can >> >> > deal with. >> >> >> >> That is just default limits. On a mac I've worked with >> about 2GB. >> >> There >> >> used to be some limitation on windows, I think there was >> an issue in >> >> 2011 on windows where there was a limit closer to 512 GB, >> but AFAIk >> >> that >> >> was fixed. >> >> >> > >> > Is that something that can be changed without a custom >> build? If so, >> > I'd love to learn how. I was under the impression that >> this was a hard >> > limit in Cog (that varies a bit by platform, but still well >> below 1G) >> >> On the mac you can change the limit in >> >> /Pharo.app/Contents/Info.plist >> >> by adjusting the value for the SqueakMaxHeapSize setting and >> restarting the >> image. >> >> >> >> >> -- >> View this message in context: >> http://forum.world.st/Re-ZINC-Kom-dilemma-Fwd-WADebugErrorHandler-problem-tp4872293p4872528.html >> Sent from the Seaside General mailing list archive at Nabble.com. >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside >> >> >> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > -- > ----------------------------------------------------------------------- > Objektfabrik Joachim Tuchelmailto:jtuchel@objektfabrik.de > Fliederweg 1http://www.objektfabrik.de > D-71640 Ludwigsburghttp://joachimtuchel.wordpress.com > Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1 > > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -- ----------------------------------------------------------------------- Objektfabrik Joachim Tuchel mailto:jtuchel@objektfabrik.de Fliederweg 1 http://www.objektfabrik.de D-71640 Ludwigsburg http://joachimtuchel.wordpress.com Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160119/02ad222d/attachment-0001.htm

From stephan at stack.nl  Tue Jan 19 15:29:05 2016
From: stephan at stack.nl (Stephan Eggermont)
Date: Tue Jan 19 15:29:22 2016
Subject: [Seaside] Re: ZINC - Kom dilemma Fwd: WADebugErrorHandler problem
In-Reply-To: References: <1453147456.3405.85.camel@gmail.com> <1453167378.3405.131.camel@gmail.com> <1453181197848-4872528.post@n4.nabble.com>
Message-ID:

On 19-01-16 15:15, Robert Kuszinger wrote:
> Hi Everyone,
>
> Just for information: it seems that there is *no nginx with upload
> module* on Windows. However in Apache doc there is also an upload
> providing fairly the same. I'm now testing it on Windows and if it works
> I follow with the Smalltalk ending in my Seaside app.

I had to compile it myself on Linux, but that was a while ago.

Stephan

From self at je77.com  Tue Jan 19 18:11:26 2016
From: self at je77.com (J.F. Rick)
Date: Tue Jan 19 18:11:38 2016
Subject: [Seaside] Anonymous Component
Message-ID:

I'm using AJAX a decent amount with my Seaside application. One common
thing is to replace an IDed element with a component using something like:

s << (s jQuery: #event) replaceWith: self.

inside a "html jQuery ajax script: [ :s | ]" block.

Is there a way to do this replacement without using a component that
implements the renderContentOn: message? For instance, what could I just do
to replace the #event component with this HTML: '<b>Success</b>'? I'm
hoping there's something I can do along the lines of:

s << (s jQuery: #event) replaceWith: [ :html |
	html html: '<b>'.
	html text: (self isSuccess
		ifTrue: [ 'Success' ]
		ifFalse: [ 'Failed' ]).
	html html: '</b>' ].

Thanks,

Jeff

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160119/e2cebead/attachment.htm

From marianopeck at gmail.com  Tue Jan 19 18:15:19 2016
From: marianopeck at gmail.com (Mariano Martinez Peck)
Date: Tue Jan 19 18:15:21 2016
Subject: [Seaside] Anonymous Component
In-Reply-To: References: Message-ID:

On Tue, Jan 19, 2016 at 3:11 PM, J.F. Rick wrote:
> I'm using AJAX a decent amount with my Seaside application. One common
> thing is to replace an IDed element with a component using something like:
> s << (s jQuery: #event) replaceWith: self.
> inside a "html jQuery ajax script: [ :s | ]" block.
>
> Is there a way to do this replacement without using a component that
> implements the renderContentOn: message? For instance, what could I just do
> to replace the #event component with this HTML: '<b>Success</b>'? I'm
> hoping there's something I can do along the lines of:
> s << (s jQuery: #event) replaceWith: [ :html |
> 	html html: '<b>'.
> 	html text: (self isSuccess
> 		ifTrue: [ 'Success' ]
> 		ifFalse: [ 'Failed' ]).
> 	html html: '</b>' ].
>

Jeff, I think that should work, exactly as you typed it. It doesn't?

--
Mariano
http://marianopeck.wordpress.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160119/71b61967/attachment.htm

From emaringolo at gmail.com  Tue Jan 19 18:19:25 2016
From: emaringolo at gmail.com (Esteban A. Maringolo)
Date: Tue Jan 19 18:20:07 2016
Subject: [Seaside] Anonymous Component
In-Reply-To: References: Message-ID:

2016-01-19 15:11 GMT-03:00 J.F. Rick:
> I'm using AJAX a decent amount with my Seaside application. One common thing
> is to replace an IDed element with a component using something like:
> s << (s jQuery: #event) replaceWith: self.
> inside a "html jQuery ajax script: [ :s | ]" block.
>
> Is there a way to do this replacement without using a component that
> implements the renderContentOn: message? For instance, what could I just do
> to replace the #event component with this HTML: '<b>Success</b>'? I'm hoping
> there's something I can do along the lines of:
> s << (s jQuery: #event) replaceWith: [ :html |
> 	html html: '<b>'.
> 	html text: (self isSuccess
> 		ifTrue: [ 'Success' ]
> 		ifFalse: [ 'Failed' ]).
> 	html html: '</b>' ].

Unless you do some server-side component instantiation or instVar
assignment, what you get in the block is just a WAHtmlCanvas (the html
var); it doesn't matter what or how you render on it.

So that should work out of the box.

Esteban A. Maringolo

From johan at inceptive.be  Wed Jan 20 22:40:23 2016
From: johan at inceptive.be (Johan Brichau)
Date: Wed Jan 20 22:40:27 2016
Subject: [Seaside] Anonymous Component
In-Reply-To: References: Message-ID: <8D092197-4624-461F-938D-D9A681081522@inceptive.be>

Jeff,

As others have mentioned, this should work as you wrote it.
The argument to replaceWith: is any renderable. This can be a component but also a rendering block ( [ :html | ... ] ).

Just one thing: it's better style to write

(html tag: 'b') with: [ ... ] instead of html html: '...'

cheers
Johan

> On 19 Jan 2016, at 19:11, J.F. Rick wrote:
>
> I'm using AJAX a decent amount with my Seaside application. One common thing is to replace an IDed element with a component using something like:
> s << (s jQuery: #event) replaceWith: self.
> inside a "html jQuery ajax script: [ :s | ]" block.
>
> Is there a way to do this replacement without using a component that implements the renderContentOn: message? For instance, what could I just do to replace the #event component with this HTML: '<b>Success</b>'? I'm hoping there's something I can do along the lines of:
> s << (s jQuery: #event) replaceWith: [ :html |
> 	html html: '<b>'.
> 	html text: (self isSuccess
> 		ifTrue: [ 'Success' ]
> 		ifFalse: [ 'Failed' ]).
> 	html html: '</b>' ].
>
> Thanks,
>
> Jeff
> _______________________________________________
> seaside mailing list
> seaside@lists.squeakfoundation.org
> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside

From self at je77.com  Thu Jan 21 21:07:34 2016
From: self at je77.com (J.F. Rick)
Date: Thu Jan 21 21:07:45 2016
Subject: [Seaside] Anonymous Component
In-Reply-To: <8D092197-4624-461F-938D-D9A681081522@inceptive.be>
References: <8D092197-4624-461F-938D-D9A681081522@inceptive.be>
Message-ID:

Too funny that I guessed something that would actually work. Awesome,

Jeff

On Wed, Jan 20, 2016 at 5:40 PM Johan Brichau wrote:

> Jeff,
>
> As others have mentioned, this should work as you wrote it.
> The argument to replaceWith: is any renderable. This can be a component but
> also a rendering block ( [ :html | ... ] ).
>
> Just one thing: it's better style to write
>
> (html tag: 'b') with: [ ... ] instead of html html: '...'
>
> cheers
> Johan
>
> > On 19 Jan 2016, at 19:11, J.F. Rick wrote:
> >
> > I'm using AJAX a decent amount with my Seaside application. One common
> thing is to replace an IDed element with a component using something like:
> > s << (s jQuery: #event) replaceWith: self.
> > inside a "html jQuery ajax script: [ :s | ]" block.
> >
> > Is there a way to do this replacement without using a component that
> implements the renderContentOn: message? For instance, what could I just do
> to replace the #event component with this HTML: '<b>Success</b>'? I'm
> hoping there's something I can do along the lines of:
> > s << (s jQuery: #event) replaceWith: [ :html |
> > html html: '<b>'.
> > html text: (self isSuccess
> > ifTrue: [ 'Success' ]
> > ifFalse: [ 'Failed' ]).
> > html html: '</b>' ].
> >
> > Thanks,
> >
> > Jeff
> > _______________________________________________
> > seaside mailing list
> > seaside@lists.squeakfoundation.org
> > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
>
> _______________________________________________
> seaside mailing list
> seaside@lists.squeakfoundation.org
> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160121/f9be2149/attachment-0001.htm
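For reference, putting Johan's style suggestion together with Jeff's original snippet gives roughly the following. This is an untested sketch; the #event id, the isSuccess test and the surrounding ajax script block are taken from Jeff's example, and the block argument is renamed to :r only to avoid confusing it with the outer html canvas:

html jQuery ajax script: [ :s |
	s << (s jQuery: #event) replaceWith: [ :r |
		"r is the canvas handed to the rendering block; no WAComponent
		 and no renderContentOn: implementation is needed"
		(r tag: 'b') with: [
			r text: (self isSuccess
				ifTrue: [ 'Success' ]
				ifFalse: [ 'Failed' ]) ] ] ]

The point of the thread is exactly that the argument to replaceWith: can be any renderable, so a plain block like this works where a full component would be overkill.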
From ramon.leon at allresnet.com  Fri Jan 22 18:14:55 2016
From: ramon.leon at allresnet.com (Ramon Leon)
Date: Fri Jan 22 18:15:27 2016
Subject: [Seaside] Seaside / .Net Job
Message-ID: <56A2719F.1050106@allresnet.com>

We're looking for a full-time remote mid-level front-end Smalltalk/Seaside/? developer with some additional non-Smalltalk skills. 4+ years experience or equivalent talent, with a strong emphasis on front-end JavaScript and AJAX development.

Must be skilled in client-side web technologies: HTML, CSS, JavaScript and jQuery (must-have skills), Prototype (can learn). Must be familiar with tools like Firebug/Chrome and Visual Studio.

Our primary application is a Seaside 2.8 web app running on Pharo 1.1 Smalltalk. A little out of date, but it serves us well and it's in need of some additions. There won't be enough Smalltalk work for a full-time position, so you need to have some additional skills with which we can keep you busy the rest of the time. So we're interested in either a Seaside Smalltalk / JavaScript person or a Seaside Smalltalk / .Net person.

We are interested in U.S. residents only; this is full-time remote work, 9-to-5 type thing.

So if you're more the Smalltalk/JavaScript type please visit: http://www.alliancereservations.com/javascript-test.html and follow the instructions there; if you're the Smalltalk/.Net type please visit: http://www.alliancereservations.com/developer-test.html and follow the instructions there. Along with those, find a way to show me your Smalltalk/Seaside skills. I don't have a specific test for it, but your code is your resume; it's what I'm looking at.

Compensation: 60-75k depending on skill set.

--
Ramon Leon
Chief Technical Officer
Alliance Reservations Network

From johan at inceptive.be  Sun Jan 31 17:30:18 2016
From: johan at inceptive.be (Johan Brichau)
Date: Sun Jan 31 17:30:23 2016
Subject: Upload large, was: Re: [Seaside] ZINC - Kom ...
In-Reply-To: <19053A84-B692-4BF8-A798-2CDD991B6904@gmx.de>
References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> <19053A84-B692-4BF8-A798-2CDD991B6904@gmx.de>
Message-ID: <9F573AA6-7E6B-4281-96A8-2F696934AD01@inceptive.be>

Hi Tobias, all,

I just added a first version of the Seaside-ExternalFileUpload package to the Seaside32 repository and am currently writing up a blog post detailing the nginx configuration and all other steps for it.

If I still have time tonight, I should have a first draft of the post online.
Btw, we also have Ajax file uploads in Seaside 3.2 > On 18 Jan 2016, at 20:30, Tobias Pape wrote: > > > On 18.01.2016, at 19:58, Johan Brichau > wrote: > >> Actually, >> >> I was quickly trying to find Nick?s blog post on the wayback machine but it?s not archived :( >> >> I did find this: http://www.squeaksource.com/fileupload/ >> >> I did not check what?s in there but until I can take a look, here it is already ;) >> > > :D > >> cheers >> Johan >> >>> On 18 Jan 2016, at 19:45, Tobias Pape wrote: >>> >>> >>> On 18.01.2016, at 19:43, Johan Brichau wrote: >>> >>>> >>>>> On 18 Jan 2016, at 19:34, Tobias Pape wrote: >>>>> >>>>> For ss3 on GS, thats right. It actually still works well that way, but we don't have many files >>>>> over 6MB, and more and more packages are over at github so I don't see any need to >>>>> implement upload improvements there. >>>> >>>> True. I was just going to say it would not really help there since you need the file inside the db anyway. >>>> >>>>> On the other system, however, we hat students trying to upload some 300 MB files. >>>>> In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire >>>>> to use nginx upload. >>>> >>>> Ok, I?m currently heads-down in some other things, so let me take a look tomorrow on this. >>>> This will work for Robert as well, of course :) >>>> >>> >>> Take your time :) >>> And, thank you. >>> >>> Best regards >>> -Tobias >>> >>>> Johan >>>> _______________________________________________ >>>> seaside mailing list >>>> seaside@lists.squeakfoundation.org >>>> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside >>> >>> _______________________________________________ >>> seaside mailing list >>> seaside@lists.squeakfoundation.org >>> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside >> >> _______________________________________________ >> seaside mailing list >> seaside@lists.squeakfoundation.org >> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160131/895e3bc6/attachment.htm From kuszinger at giscom.hu Sun Jan 31 17:33:32 2016 From: kuszinger at giscom.hu (Robert Kuszinger) Date: Sun Jan 31 17:33:44 2016 Subject: Upload large, was: Re: [Seaside] ZINC - Kom ... In-Reply-To: <9F573AA6-7E6B-4281-96A8-2F696934AD01@inceptive.be> References: <98771D4A-2BD7-4228-B765-DCDD891BBDAC@stfx.eu> <783FD498-A3B3-41CD-BAB3-807E5D33F9EB@inceptive.be> <8EDC2A54-0773-4999-8272-FAA3966177B0@inceptive.be> <8C07E94C-7FC2-49D1-A0CA-584C6BD2FCCD@inceptive.be> <19053A84-B692-4BF8-A798-2CDD991B6904@gmx.de> <9F573AA6-7E6B-4281-96A8-2F696934AD01@inceptive.be> Message-ID: Hello Everyone! I've tested the same on Win/Apache as promised. I was able to put the mod_upload to work but it produced weird result. I was able to catch the raw information coming from the Apache filter. But on the other side: the uploaded file in the temp directory was bad... I may need more time with Apache Upload to see what it produces. Best regards Robert Johan Brichau ezt ?rta (id?pont: 2016. jan. 
31., V, 18:30): > Hi Tobias, all, > > I just added a first version Seaside-ExternalFileUpload package to the > Seaside32 repository and currently writing up a blog post detailing the > nginx configuration and all other steps for it. > > If I still have time tonight, I should have a first draft of the post > online. > > Btw, we also have Ajax file uploads in Seaside 3.2 > > On 18 Jan 2016, at 20:30, Tobias Pape wrote: > > > On 18.01.2016, at 19:58, Johan Brichau wrote: > > Actually, > > I was quickly trying to find Nick?s blog post on the wayback machine but > it?s not archived :( > > I did find this: http://www.squeaksource.com/fileupload/ > > I did not check what?s in there but until I can take a look, here it is > already ;) > > > :D > > cheers > Johan > > On 18 Jan 2016, at 19:45, Tobias Pape wrote: > > > On 18.01.2016, at 19:43, Johan Brichau wrote: > > > On 18 Jan 2016, at 19:34, Tobias Pape wrote: > > For ss3 on GS, thats right. It actually still works well that way, but we > don't have many files > over 6MB, and more and more packages are over at github so I don't see any > need to > implement upload improvements there. > > > True. I was just going to say it would not really help there since you > need the file inside the db anyway. > > On the other system, however, we hat students trying to upload some 300 MB > files. > In principle that was fine, but Nginx+FCGI+Seaside took way too long, > hence my desire > to use nginx upload. > > > Ok, I?m currently heads-down in some other things, so let me take a look > tomorrow on this. > This will work for Robert as well, of course :) > > > Take your time :) > And, thank you. > > Best regards > -Tobias > > Johan > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > > > _______________________________________________ > seaside mailing list > seaside@lists.squeakfoundation.org > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.squeakfoundation.org/pipermail/seaside/attachments/20160131/8aac80e5/attachment-0001.htm
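For readers following the upload discussion above: the idea behind Johan's Seaside-ExternalFileUpload work and the nginx upload module is that the front-end web server writes the file body to disk itself and only hands the resulting metadata to the image. A minimal sketch of such a configuration is shown below; the directive names are those of the third-party nginx upload module and should be checked against its documentation, and the paths, backend address and field names are assumptions to be adapted to your own setup:

    location /file-upload {
        # nginx stores the request body on disk instead of streaming it through Seaside
        upload_store /var/tmp/nginx-uploads;
        # forward only the metadata (original file name, temporary path) as form fields
        upload_set_form_field $upload_field_name.name "$upload_file_name";
        upload_set_form_field $upload_field_name.path "$upload_tmp_path";
        # hand the rewritten request on to the Seaside backend
        upload_pass @seaside;
        upload_cleanup 400 404 499 500-505;
    }

    location @seaside {
        proxy_pass http://127.0.0.1:8080;
    }

The Seaside callback then receives only the temporary file path and original name, and the image can move or process the file without the bytes ever travelling through Smalltalk memory, which is what keeps large uploads from blowing up the heap limits discussed earlier in the thread.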