Sven,
thanks for the tip. Actually the UTF problem doesn't appear with Zinc, but I also need to do large file uploads. Zinc resets the connection somewhere between 10 and 19 MB file size. Is there a built-in limitation?
Anyway, an idea on how to break through this limit would also help. I need to upload 150-300 MB files regularly with this service. KomHttp did it out-of-the-box.
thanks Robert
2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe sven@stfx.eu:
Robert,
If you are using Pharo, it seems more logical that you would use ZnZincServerAdaptor.
But of course, it might not necessarily have to do with the adaptor.
HTH,
Sven
On 18 Jan 2016, at 13:17, Robert Kuszinger kuszinger@giscom.hu wrote:
Sven,
thanks for the tip. Actually the UTF problem doesn't appear with Zinc, but I also need to do large file uploads. Zinc resets the connection somewhere between 10 and 19 MB file size. Is there a built-in limitation?
Yes, one of several limits used by Zn to protect itself from resource abuse (DoS). The default limit is 16 MB.
Anyway, an idea on how to break through this limit would also help. I need to upload 150-300 MB files regularly with this service. KomHttp did it out-of-the-box.
The limit you are running into is called #maximumEntitySize and can be set with ZnServer>>#maximumEntitySize:.
Given an adaptor instance, you can access the server using #server.
So together this would be
ZnZincServerAdaptor default server maximumEntitySize: 300 * 1024 * 1024.
Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory.
Let me know how that goes.
seaside mailing list seaside@lists.squeakfoundation.org http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
Sven,
thanks for the comments. I understand it all.
Could you please clarify this:
"Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory."
Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation?
Answering on how it goes:
20 - 40 - 10 MB uploads went through in seconds. Now it seems to be stuck on a ~120 MB upload. Pharo memory (Windows OS) seemed to grow to ~441 MB. A "Space is low" warning window appeared. I've clicked "Proceed" just out of curiosity, but no reaction in the Pharo GUI... hmmm...
thanks Robert
Robert,
This is not such an easy problem - you have to really understand HTTP.
BTW, such huge uploads don't seem like a very good idea anyway; you will get annoying timeouts as well. I am curious: what is in those files?
Now, here is the key idea (pure Zn, no Seaside, quick hack):
(ZnServer startOn: 1701)
    reader: [ :stream | ZnRequest readStreamingFrom: stream ];
    maximumEntitySize: 100 * 1024 * 1024;
    onRequestRespond: [ :req |
        '/tmp/upload.bin' asFileReference writeStreamDo: [ :out |
            out binary.
            ZnUtils streamFrom: req entity stream to: out ].
        ZnResponse ok: (ZnEntity text: 'done') ];
    yourself.
You would use it like this:
$ echo one two three > data.bin
$ curl -X POST -d @data.bin http://localhost:1701
$ cat /tmp/upload.bin
one two three
With a 1Mb data file generated from Pharo:
'/tmp/data.txt' asFileReference writeStreamDo: [ :out |
    1 * 1024 timesRepeat: [
        1 to: 32 do: [ :each |
            out << Character alphabet << (each printStringPadded: 5); lf ] ] ]
$ curl -v -X POST --data-binary @data2.bin http://localhost:1701
* Rebuilt URL to: http://localhost:1701/
*   Trying ::1...
* connect to ::1 port 1701 failed: Connection refused
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 1701 (#0)
> POST / HTTP/1.1
> Host: localhost:1701
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 1048576
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
* Done waiting for 100-continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=utf-8
< Content-Length: 4
< Date: Mon, 18 Jan 2016 14:56:53 GMT
< Server: Zinc HTTP Components 1.0
<
* Connection #0 to host localhost left intact
done
$ diff data2.bin /tmp/upload.bin
This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly.
Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside.
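To sketch what that parsing step might look like (untested; 'file' is a made-up field name, and the part accessors should be checked against the actual Zn classes before relying on this):

```smalltalk
| entity part |
"Assumes the POST carried multipart/form-data, so #entity answers a ZnMultiPartFormDataEntity."
entity := request entity.
part := entity partNamed: 'file'.
'/tmp/upload.bin' asFileReference binaryWriteStreamDo: [ :out |
    out nextPutAll: part entity bytes ]
```

Note that #bytes pulls the whole part into memory, which is exactly the problem with large uploads.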
I also tried with 100 MB; it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16 KB buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue.
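If the buffer size is indeed the bottleneck, one could swap the #streamFrom:to: call for a hand-rolled copy loop with a bigger buffer - only a sketch, untested, with the 256 KB size picked arbitrarily (here `in` would be the request's entity stream and `out` the binary file stream from the handler above):

```smalltalk
| buffer read |
buffer := ByteArray new: 256 * 1024.  "256 KB instead of the 16 KB default"
[ in atEnd ] whileFalse: [
    read := in readInto: buffer startingAt: 1 count: buffer size.
    out next: read putAll: buffer startingAt: 1 ]
```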
That is why I asked what is in the files and what you eventually want to do with them. Is the next processing step in Pharo too?
Maybe all that is needed is giving Pharo more memory. What platform are you on?
Sven
Sven,
thanks for the demo. Zn without Seaside is just fine if it works. A one-field form with only the uploaded file would also work. Some JavaScript addition on the client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data, with a filename and binary content...
*Usage*: An office needs to receive large map / digital survey data files from clients. Now they post them on CD or DVD disks; the typical amount is 100-200 MB in one, two, or more files (depending on who has heard about ZIP and who hasn't :) - really!). So we are trying to create an upload portal where they can log in and then upload files to folders whose names contain their ID and the date. That's it.
No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads.
Login and auth against the existing client database was done in Seaside/Pharo in a few hours; it works nicely.
It would be great to create the upload-receiving part with Pharo as well.
All this stuff is behind IIS/ARR - tested for large uploads; it worked well after extending the timeout limitations in IIS (with Kom, but eating memory - maybe not as much as Zinc now - and it had the codepage problem I wanted to debug earlier). OS is Windows Server 2008 R2 Datacenter Edition, IIS 7.5.
I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.)
This is the scenario.
Robert
Hey all
just my 2ct while skimming the thread.
I have upload problems with my Seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, which ships an "upload module": https://www.nginx.com/resources/wiki/modules/upload/
Given that, the upload is handled by the reverse proxy and only when the file is already on the file system, the backend (seaside in this case) would get a notification request.
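Sketched as nginx config, that flow might look something like this (directive names are from the module's documentation; the paths, field names, and backend URL are made up for illustration):

```nginx
location /upload {
    # The module intercepts the multipart POST and writes the file parts
    # to disk itself, so the Smalltalk image never sees the big body.
    upload_store /var/spool/nginx-uploads;

    # After storing, nginx forwards a small form to the backend instead,
    # carrying the original name and the on-disk path of each file.
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";

    # Hypothetical Seaside/Zn handler that just records the notification.
    upload_pass /seaside/upload-done;

    # Remove the stored file if the backend answers with an error.
    upload_cleanup 400 404 500-505;
}
```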
I plan to implement this within the next 6 weeks, so if I get something usable going, I'll probably hand it back to the Seaside community :) Remind me if I forget ;)
best regards -Tobias
I quickly read through the docs of the nginx module: that looks like a very good solution.
Is the plugin good and available for the open source version of nginx?
Hi Tobias,
This is what we have been doing for years :) There was a blog post online describing all the details, but I can't find it anymore. The only thing I can find is Nick Ager's reply from when we had a little trouble setting it up [1]
I might try to separate out the code to spare you some time. It's quite simple, actually.
[1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html
Johan
On 18 Jan 2016, at 18:56, Tobias Pape Das.Linux@gmx.de wrote:
Hey all
just my 2ct while skimming the thread.
I have upload problems with my seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, wich ships an "upload module" https://www.nginx.com/resources/wiki/modules/upload/
Given that, the upload is handled by the reverse proxy and only when the file is already on the file system, the backend (seaside in this case) would get a notification request.
I plan to implement this within the next 6 weeks, so if I get going something usable, I'll probably hand it back to the seaside community :) Remind me if I forget ;)
best regards -Tobias
On 18.01.2016, at 18:00, Robert Kuszinger kuszinger@giscom.hu wrote:
Sven,
thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content...
Usage: An office need to receive large map / digital survey data files from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it.
No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads.
Login, auth to existing client database is done in Seaside/Pharo in a few hours, works nicely.
I would be great to create the upload receiving part also with Pharo at least.
All this stuff is behind and IIS/ARR - tested for large uploads, worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5.
I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.)
This is the scenario.
Robert
Sven Van Caekenberghe sven@stfx.eu ezt írta (időpont: 2016. jan. 18., H, 16:30): Robert,
This is not such an easy problem, you have to really understand HTTP.
BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ?
Now, here is the key idea (pure Zn, no Seaside, quick hack):
(ZnServer startOn: 1701) reader: [ :stream | ZnRequest readStreamingFrom: stream ]; maximumEntitySize: 100*1024*1024; onRequestRespond: [ :req | '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | out binary. ZnUtils streamFrom: req entity stream to: out ]. ZnResponse ok: (ZnEntity text: 'done') ]; yourself.
You would use it like this:
$ echo one two three > data.bin $ curl -X POST -d @data.bin http://localhost:1701 $ cat /tmp/upload.bin one two three
With a 1Mb data file generated from Pharo:
'/tmp/data.txt' asFileReference writeStreamDo: [ :out | 1 * 1024 timesRepeat: [ 1 to: 32 do: [ :each | out << Character alphabet << (each printStringPadded: 5); lf ] ] ]
$ curl -v -X POST --data-binary @data2.bin http://localhost:1701
- Rebuilt URL to: http://localhost:1701/
- Trying ::1...
- connect to ::1 port 1701 failed: Connection refused
- Trying 127.0.0.1...
- Connected to localhost (127.0.0.1) port 1701 (#0)
POST / HTTP/1.1 Host: localhost:1701 User-Agent: curl/7.43.0 Accept: */* Content-Length: 1048576 Content-Type: application/x-www-form-urlencoded Expect: 100-continue
- Done waiting for 100-continue
- We are completely uploaded and fine
< HTTP/1.1 200 OK < Content-Type: text/plain;charset=utf-8 < Content-Length: 4 < Date: Mon, 18 Jan 2016 14:56:53 GMT < Server: Zinc HTTP Components 1.0 <
- Connection #0 to host localhost left intact
done
$ diff data2.bin /tmp/upload.bin
This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly.
Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside.
I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue.
That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ?
Maybe all that is needed is giving Pharo more memory. What platform are you on ?
Sven
On 18 Jan 2016, at 13:39, Robert Kuszinger kuszinger@giscom.hu wrote:
Sven,
thanks for the comments. I understand all.
Could you please clarify this:
"Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory."
Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside setup or design limitation?
Answering on how it goes:
20 - 40 - 10 MB uploads went through in seconds. Now it seems to be stuck on a ~120 MB upload. Pharo memory (on Windows) seemed to grow to ~441 MB and a "Space is low" warning window appeared. I clicked "Proceed" just out of curiosity, but there was no reaction in the Pharo GUI... hmmm...
thanks Robert
2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe sven@stfx.eu:
seaside mailing list seaside@lists.squeakfoundation.org http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
Hi Johan
On 18.01.2016, at 19:18, Johan Brichau johan@inceptive.be wrote:
Hi Tobias,
This is what we have been doing for years :) There was a blog post online describing all the details, but I can't find it anymore. The only thing I can find is Nick Ager's reply from when we had a little trouble setting it up [1].
I might try to separate off the code to spare you some time. It’s quite simple, actually.
Oh, that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messages ;)
Best regards -Tobias
[1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html
Johan
On 18 Jan 2016, at 18:56, Tobias Pape Das.Linux@gmx.de wrote:
Hey all
just my 2ct while skimming the thread.
I have upload problems with my Seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario that is nginx, which ships an "upload module": https://www.nginx.com/resources/wiki/modules/upload/
Given that, the upload is handled by the reverse proxy, and only when the file is already on the file system does the backend (Seaside in this case) get a notification request.
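For illustration, the kind of nginx configuration this module uses might look like the sketch below. The directive names come from the upload module's documentation, but the locations and paths are placeholders, not a tested setup:

```nginx
# Sketch: nginx receives the upload itself, then notifies the Seaside backend.
location /upload {
    upload_pass /seaside/files/uploaded;   # backend URL that gets the notification
    upload_store /tmp/nginx_uploads;       # nginx writes the file here first
    # Hand the original name, content type and temp path to the backend:
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.content_type "$upload_content_type";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
    upload_cleanup 400 404 499 500-505;    # drop the temp file on these statuses
}
```

The backend then only receives a small form post with the temp-file path, so the Smalltalk image never holds the file in memory.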
I plan to implement this within the next 6 weeks, so if I get something usable going, I'll probably hand it back to the Seaside community :) Remind me if I forget ;)
best regards -Tobias
On 18.01.2016, at 18:00, Robert Kuszinger kuszinger@giscom.hu wrote:
Sven,
thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content...
Usage: an office needs to receive large map / digital survey data files from clients. Right now clients post them on CD or DVD disks; the typical amount is 100-200 MB in one, two or more files (depending on who has heard about ZIP and who hasn't :) - really!). So we are trying to create an upload portal where they can log in and then upload files into folders whose names contain their ID and the date. That's it.
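The folder naming described here is straightforward in Pharo; a sketch (clientId, the base path and the name format are hypothetical, not from the thread):

```smalltalk
"Sketch: create an upload folder named <clientId>-<yyyy-mm-dd>.
'clientId' and the base path are made-up placeholders."
| base folder |
base := '/srv/uploads' asFileReference.
folder := base / (clientId , '-' , Date today yyyymmdd).
folder ensureCreateDirectory.
```

Keeping the ID and date in the folder name makes the later manual pickup by office staff trivial.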
No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know this from their everyday life. And they could also add metadata about their uploads.
Login and auth against the existing client database was done in Seaside/Pharo in a few hours and works nicely.
It would be great to create the upload-receiving part with Pharo as well.
All this sits behind an IIS/ARR reverse proxy - tested for large uploads, it worked well after extending the timeout limits in IIS (with Kom, but it was eating memory - maybe not as much as Zinc now - and it had the codepage problem I wanted to debug earlier). The OS is Windows Server 2008 R2 Datacenter Edition, IIS 7.5.
I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.)
This is the scenario.
Robert
Sven Van Caekenberghe sven@stfx.eu wrote (on 18 Jan 2016 at 16:30):
Robert,
This is not such an easy problem, you have to really understand HTTP.
BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ?
Now, here is the key idea (pure Zn, no Seaside, quick hack):
On 18 Jan 2016, at 19:19, Tobias Pape Das.Linux@gmx.de wrote:
Oh, that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messages ;)
Best regards -Tobias
Are you referring to file uploads in SS3? There, the file is stored inside the GemStone db, right?
Johan
On 18.01.2016, at 19:24, Johan Brichau johan@inceptive.be wrote:
Are you referring to file uploads on SS3?
It is not the SS3, it is another system :)
There, the file is stored inside of Gemstone db, right?
For SS3 on GS, that's right. It actually still works well that way, but we don't have many files over 6 MB, and more and more packages are over at GitHub, so I don't see any need to implement upload improvements there.
On the other system, however, we had students trying to upload some 300 MB files. In principle that was fine, but nginx+FCGI+Seaside took way too long, hence my desire to use nginx upload.
Best regards -Tobias
Johan
On 18 Jan 2016, at 19:34, Tobias Pape Das.Linux@gmx.de wrote:
For SS3 on GS, that's right. It actually still works well that way, but we don't have many files over 6 MB, and more and more packages are over at GitHub, so I don't see any need to implement upload improvements there.
True. I was just going to say it would not really help there since you need the file inside the db anyway.
On the other system, however, we had students trying to upload some 300 MB files. In principle that was fine, but nginx+FCGI+Seaside took way too long, hence my desire to use nginx upload.
Ok, I’m currently heads-down in some other things, so let me take a look tomorrow on this. This will work for Robert as well, of course :)
Johan
On 18.01.2016, at 19:43, Johan Brichau johan@inceptive.be wrote:
Ok, I’m currently heads-down in some other things, so let me take a look tomorrow on this. This will work for Robert as well, of course :)
Take your time :) And, thank you.
Best regards -Tobias
Johan
Actually,
I was quickly trying to find Nick's blog post on the Wayback Machine, but it's not archived :(
I did find this: http://www.squeaksource.com/fileupload/
I did not check what’s in there but until I can take a look, here it is already ;)
cheers Johan
On 18.01.2016, at 19:58, Johan Brichau johan@inceptive.be wrote:
Actually,
I was quickly trying to find Nick’s blog post on the wayback machine but it’s not archived :(
I did find this: http://www.squeaksource.com/fileupload/
I did not check what’s in there but until I can take a look, here it is already ;)
:D
Hi Tobias, all,
I just added a first version of the Seaside-ExternalFileUpload package to the Seaside32 repository and am currently writing up a blog post detailing the nginx configuration and all the other steps.
If I still have time tonight, I should have a first draft of the post online.
Btw, we also have Ajax file uploads in Seaside 3.2
Hello Everyone!
I've tested the same on Windows/Apache as promised. I was able to get mod_upload working, but it produced a weird result: I could catch the raw information coming from the Apache filter, but on the other side the uploaded file in the temp directory was bad... I may need more time with Apache Upload to see what it produces.
Best regards Robert
Hey there,
I did a round-up of the code and my text, so everything is here: http://jbrichau.github.io/blog/large-file-upload-in-seaside
cheers Johan
On 05 Feb 2016, at 09:07, Johan Brichau johan@inceptive.be wrote:
Hey there,
I did a round-up of the code and my text, so everything is here: http://jbrichau.github.io/blog/large-file-upload-in-seaside
Thanks a lot, Johan, very nice write up.
One can only imagine what would happen, what we would learn, if you blogged more than once every two years ;-)
Sven
On 05 Feb 2016, at 10:11, Sven Van Caekenberghe sven@stfx.eu wrote:
One can only imagine what would happen, what we would learn, if you blogged more than once every two years ;-)
I probably would need to write a lot less interesting stuff :) Thanks for the appreciation.
Before promising to blog more, I really want to get a new Seaside website out the door. Next, we need all the help we can get updating the documentation.
cheers Johan
Hmmm. Putting the upload one step out, into the front line...
I'll ask my office about nginx and do a test install myself.
Anyway, a pure Pharo streaming solution is still interesting to me.
R 2016.01.18. 19:20 ezt írta ("Tobias Pape" Das.Linux@gmx.de):
Hi Johan On 18.01.2016, at 19:18, Johan Brichau johan@inceptive.be wrote:
Hi Tobias,
This is what we do since years :) There was a blog post online describing all the details but I don’t find
it anymore.
The only think I can find is Nick Ager’s reply when we had a little
trouble setting it up [1]
I might try to separate off the code to spare you some time. It’s quite simple, actually.
Oh that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messaged ;)
Best regards -Tobias
[1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html
Johan
On 18 Jan 2016, at 18:56, Tobias Pape Das.Linux@gmx.de wrote:
Hey all
just my 2ct while skimming the thread.
I have upload problems with my seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, wich ships an "upload module" https://www.nginx.com/resources/wiki/modules/upload/
Given that, the upload is handled by the reverse proxy and only when the file is already on the file system, the backend (seaside in this
case)
would get a notification request.
I plan to implement this within the next 6 weeks, so if I get going
something
usable, I'll probably hand it back to the seaside community :) Remind me if I forget ;)
best regards -Tobias
On 18.01.2016, at 18:00, Robert Kuszinger kuszinger@giscom.hu wrote:
Sven,
thanks for the demo. Zn without Seaside is just fine if it could work.
A one-field form with only the uploaded file could work also. Some javascript addition on client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content...
Usage: An office need to receive large map / digital survey data files
from clients. Now they post it on CD or DVD disks, however the typical amount is 100-200 Mb in one or two or more files (depending one who has heard about ZIP and who hasn't :) - really! ). So we are trying to create an upload portal where they could login and then upload files to folders where folder name contains their ID and date. That's it.
No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure
browser upload as clients know this from their everyday life. And they could also add metadata about their uploads.
Login, auth to existing client database is done in Seaside/Pharo in a
few hours, works nicely.
I would be great to create the upload receiving part also with Pharo
at least.
All this stuff is behind and IIS/ARR - tested for large uploads,
worked well when extending timeout limitations is IIS (with Kom but eating memory, maybe not so much as Zinc now, but it had the codepage problem I wanted debug earlier). OS is Windows Server 2008 R2Datacenter Edition, IIS 7.5.
I'm developing on Linux and testing on Windows Server 2008 configured
to the same setup (IIS, ARR, etc.)
This is the scenario.
Robert
Sven Van Caekenberghe sven@stfx.eu ezt írta (időpont: 2016. jan.
18., H, 16:30):
Robert,
This is not such an easy problem, you have to really understand HTTP.
BTW, such huge uploads don't seem a very good idea anyway, you will
get annoying timeouts as well. I am curious, what is in those files ?
Now, here is the key idea (pure Zn, no Seaside, quick hack):
(ZnServer startOn: 1701) reader: [ :stream | ZnRequest readStreamingFrom: stream ]; maximumEntitySize: 100*1024*1024; onRequestRespond: [ :req | '/tmp/upload.bin' asFileReference writeStreamDo: [ :out | out binary. ZnUtils streamFrom: req entity stream to: out ]. ZnResponse ok: (ZnEntity text: 'done') ]; yourself.
You would use it like this:
$ echo one two three > data.bin $ curl -X POST -d @data.bin http://localhost:1701 $ cat /tmp/upload.bin one two three
With a 1Mb data file generated from Pharo:
'/tmp/data.txt' asFileReference writeStreamDo: [ :out | 1 * 1024 timesRepeat: [ 1 to: 32 do: [ :each | out << Character alphabet << (each printStringPadded: 5); lf ] ] ]
$ curl -v -X POST --data-binary @data2.bin http://localhost:1701
- Rebuilt URL to: http://localhost:1701/
- Trying ::1...
- connect to ::1 port 1701 failed: Connection refused
- Trying 127.0.0.1...
- Connected to localhost (127.0.0.1) port 1701 (#0)
POST / HTTP/1.1 Host: localhost:1701 User-Agent: curl/7.43.0 Accept: */* Content-Length: 1048576 Content-Type: application/x-www-form-urlencoded Expect: 100-continue
- Done waiting for 100-continue
- We are completely uploaded and fine
< HTTP/1.1 200 OK < Content-Type: text/plain;charset=utf-8 < Content-Length: 4 < Date: Mon, 18 Jan 2016 14:56:53 GMT < Server: Zinc HTTP Components 1.0 <
- Connection #0 to host localhost left intact
done
$ diff data2.bin /tmp/upload.bin
This code is totally incomplete, you need lots of error handling.
Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly.
Also, if you want an upload in a form, you will have to parse that
form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside.
I also tried with 100Mb, it worked, but it took several minutes, like
10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue.
That is why I asked what is in the files, what you eventually want to
do with it. Is the next processing step in Pharo too ?
Maybe all that is needed is giving Pharo more memory. What platform
are you on ?
Sven
On 18 Jan 2016, at 13:39, Robert Kuszinger kuszinger@giscom.hu
wrote:
Sven,
thanks for the comments. I understand all.
Could you please clarify this:
"Technically, it would be possible to write a Zn handler that can
accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory."
Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside setup or design limitation?
Answering on how it goes:
20 - 40 - 10MB uploads in seconds. Now it seems to be stuck on a ~120 MB upload. Pharo memory (Windows OS) seemed to grow to ~441 MB. A "Space is low" warning window appeared. I've clicked on "Proceed" just out of curiosity, but no reaction in the Pharo GUI... hmmm...
thanks Robert
2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe sven@stfx.eu:
seaside mailing list seaside@lists.squeakfoundation.org http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
Hi Johan,
Thanks for your blog post.
I found Nick Ager’s blog posts regarding Nginx: http://nickager.com/tags/#nginx
You might want to link to them from yours, maybe?
Cheers, Bernhard
On 19.01.2016 at 05:18, Johan Brichau johan@inceptive.be wrote:
Hi Tobias,
This is what we have been doing for years :) There was a blog post online describing all the details, but I can't find it anymore. The only thing I can find is Nick Ager's reply from when we had a little trouble setting it up [1].
I might try to separate off the code to spare you some time. It’s quite simple, actually.
[1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html
Johan
On 18 Jan 2016, at 18:56, Tobias Pape Das.Linux@gmx.de wrote:
Hey all
just my 2ct while skimming the thread.
I have upload problems with my Seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, which ships an "upload module": https://www.nginx.com/resources/wiki/modules/upload/
Given that, the upload is handled by the reverse proxy, and only when the file is already on the file system does the backend (Seaside in this case) get a notification request.
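As a rough illustration of that setup (paths and the backend location are hypothetical; directive names as I understand them from the upload module's documentation — verify against your nginx build):

```nginx
# nginx receives the upload, spools it to disk, then notifies the backend
location /upload {
    upload_pass   /seaside/upload-done;   # backend gets a small POST, not the file
    upload_store  /var/uploads;           # spool directory for uploaded files
    # tell the backend where the file ended up and what it was called
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
    upload_pass_form_field "^description$";
    upload_cleanup 400 404 499 500-505;   # remove spooled files on these errors
}
```

The Seaside handler behind /seaside/upload-done then only sees the metadata fields, never the file body itself.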
I plan to implement this within the next 6 weeks, so if I get something usable going, I'll probably hand it back to the Seaside community :) Remind me if I forget ;)
best regards -Tobias
On 18.01.2016, at 18:00, Robert Kuszinger kuszinger@giscom.hu wrote:
Sven,
thanks for the demo. Zn without Seaside is just fine if it can work. A one-field form with only the uploaded file could also work. Some JavaScript addition on the client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data, with a filename and binary content...
Usage: An office needs to receive large map / digital survey data files from clients. Now clients post them on CD or DVD disks; the typical amount is 100-200 MB in one, two, or more files (depending on who has heard about ZIP and who hasn't :) - really!). So we are trying to create an upload portal where they can log in and then upload files to folders whose names contain their ID and the date. That's it.
No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload, as clients know it from their everyday life. And they can also add metadata about their uploads.
Login and auth against the existing client database were done in Seaside/Pharo in a few hours and work nicely.
It would be great to build the upload-receiving part with Pharo as well.
All this sits behind IIS/ARR - tested for large uploads, which worked well after extending the timeout limits in IIS (with Kom, but eating memory - maybe not as much as Zinc now - and it had the codepage problem I wanted to debug earlier). The OS is Windows Server 2008 R2 Datacenter Edition, IIS 7.5.
I'm developing on Linux and testing on Windows Server 2008 configured to the same setup (IIS, ARR, etc.)
This is the scenario.
Robert
On 18 Jan 2016, at 16:30, Sven Van Caekenberghe sven@stfx.eu wrote:
Robert,
This is not such an easy problem; you have to really understand HTTP.
BTW, such huge uploads don't seem like a very good idea anyway; you will get annoying timeouts as well. I am curious - what is in those files?
Now, here is the key idea (pure Zn, no Seaside, quick hack):
(ZnServer startOn: 1701)
	reader: [ :stream | ZnRequest readStreamingFrom: stream ];
	maximumEntitySize: 100*1024*1024;
	onRequestRespond: [ :req |
		'/tmp/upload.bin' asFileReference writeStreamDo: [ :out |
			out binary.
			ZnUtils streamFrom: req entity stream to: out ].
		ZnResponse ok: (ZnEntity text: 'done') ];
	yourself.
You would use it like this:
$ echo one two three > data.bin
$ curl -X POST -d @data.bin http://localhost:1701
$ cat /tmp/upload.bin
one two three
On 27-03-16 07:37, Bernhard Pieber wrote:
You might want to link to them from yours, maybe?
Yes, Nick Ager recently restarted his blog.
Stephan
On 01/18/2016 06:00 PM, Robert Kuszinger wrote:
Interesting, I solved this bit in Kom waaay back:
http://forum.world.st/Final-try-for-Kom-Seaside-file-upload-tuning-td98355.h...
regards, Göran
On Mon, 2016-01-18 at 13:39 +0100, Robert Kuszinger wrote:
Assuming you're running the Cog VM, you're likely hitting its memory limits. I don't recall the exact number, but once you get to the 400-500 MB range you're hitting the absolute limit of how much RAM Cog can deal with. Also, as you've noticed, once you get over about 300 MB things start to slow down rather dramatically. From what I've read, Spur roughly doubles the amount of RAM that the VM can work with and should perform much better at large image sizes. You might want to consider handling large file uploads like this outside of the image (i.e. still have your front end in Seaside but handle the actual upload via an external mechanism).
On 18/01/16 21:04, Phil (list) wrote:
Assuming you're running the Cog VM you're likely hitting its memory limits. I don't recall the exact number, but once you get to the 400-500 MB range you're hitting the absolute limit of how much RAM Cog can deal with.
Those are just default limits. On a Mac I've worked with about 2 GB. There used to be some limitation on Windows; I think there was an issue in 2011 where the limit was closer to 512 MB, but AFAIK that was fixed.
Stephan
On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote:
Those are just default limits. On a Mac I've worked with about 2 GB. There used to be some limitation on Windows; I think there was an issue in 2011 where the limit was closer to 512 MB, but AFAIK that was fixed.
Is that something that can be changed without a custom build? If so, I'd love to learn how. I was under the impression that this was a hard limit in Cog (one that varies a bit by platform, but is still well below 1 GB).
Phil (list) wrote
Is that something that can be changed without a custom build? If so, I'd love to learn how. I was under the impression that this was a hard limit in Cog (that varies a bit by platform, but still well below 1G)
On the Mac you can change the limit in <my-vm-dir>/Pharo.app/Contents/Info.plist by adjusting the value of the SqueakMaxHeapSize setting and restarting the image.
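For reference, the relevant entry in that Info.plist looks roughly like this (the value is in bytes; 1 GB here is just an example figure):

```xml
<key>SqueakMaxHeapSize</key>
<integer>1073741824</integer>
```

On Linux the Cog VM has, if I remember correctly, a -memory command line option serving the same purpose.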
Paul, Phil,
thanks for the ideas and information. I'll also try this, to make sure every factor is optimal. Summarizing all the comments so far, it seems that the bulletproof service infrastructure is streaming uploads to disk, possibly outside the Smalltalk VM (the nginx way), while keeping the rest of the application data inside. Memory limits are still interesting for safely handling a larger parallel load.
thanks R
On 19 Jan 2016, at 6:48, Paul DeBruicker pdebruic@gmail.com wrote:
Hi Everyone,
Just for information: it seems that there is *no nginx upload module* available for Windows. However, the Apache docs describe an upload module providing fairly the same thing. I'm now testing it on Windows, and if it works I'll follow up with the Smalltalk end in my Seaside app.
regards Robert
On 19 Jan 2016, at 8:40, Robert Kuszinger kuszinger@giscom.hu wrote:
Hi Robert,
I was fascinated by the idea of having Apache / nginx leave Seaside alone during a file upload until it is finished, and tried to find out if something like that is available for Apache. I couldn't find such a thing, but I am sure our Kontolino app could benefit a lot from it, both CPU- and memory-wise.
So I'd be grateful if you could provide a link to what you found...
Joachim
On 19.01.16 at 15:15, Robert Kuszinger wrote:
Robert,
so I made another attempt and found this: http://apache.webthing.com/mod_upload/
Is this what you referred to?
Joachim
On 19.01.16 at 15:21, jtuchel@objektfabrik.de wrote:
--
Objektfabrik Joachim Tuchel          mailto:jtuchel@objektfabrik.de
Fliederweg 1                         http://www.objektfabrik.de
D-71640 Ludwigsburg                  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0         Fax: +49 7141 56 10 86 1
On 19-01-16 15:15, Robert Kuszinger wrote:
I had to compile it myself on Linux, but that was a while ago.
Stephan