[Seaside] ZINC - Kom dilemma Fwd: WADebugErrorHandler problem

Sven Van Caekenberghe sven at stfx.eu
Mon Jan 18 15:30:47 UTC 2016


Robert,

This is not such an easy problem; you really have to understand HTTP.

BTW, such huge uploads don't seem like a very good idea anyway; you will get annoying timeouts as well. I am curious: what is in those files?

Now, here is the key idea (pure Zn, no Seaside, quick hack):

(ZnServer startOn: 1701)
  reader: [ :stream | ZnRequest readStreamingFrom: stream ];
  maximumEntitySize: 100*1024*1024;
  onRequestRespond: [ :req |
    '/tmp/upload.bin' asFileReference writeStreamDo: [ :out |
       out binary.
       ZnUtils streamFrom: req entity stream to: out ].
    ZnResponse ok: (ZnEntity text: 'done') ];
  yourself.

You would use it like this:

$ echo one two three > data.bin
$ curl -X POST -d @data.bin http://localhost:1701
$ cat /tmp/upload.bin 
one two three

With a 1 MB data file generated from Pharo:

"Generates data2.bin: 1024 iterations x 32 lines x 32 bytes (26-letter alphabet + 5-digit counter + lf) = 1 MB."
'data2.bin' asFileReference writeStreamDo: [ :out |
  1 * 1024 timesRepeat: [ 
    1 to: 32 do: [ :each |
      out << Character alphabet << (each printStringPadded: 5); lf ] ] ]

$ curl -v -X POST --data-binary @data2.bin http://localhost:1701
* Rebuilt URL to: http://localhost:1701/
*   Trying ::1...
* connect to ::1 port 1701 failed: Connection refused
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 1701 (#0)
> POST / HTTP/1.1
> Host: localhost:1701
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 1048576
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
> 
* Done waiting for 100-continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=utf-8
< Content-Length: 4
< Date: Mon, 18 Jan 2016 14:56:53 GMT
< Server: Zinc HTTP Components 1.0
< 
* Connection #0 to host localhost left intact
done

$ diff data2.bin /tmp/upload.bin 

This code is totally incomplete; you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you become responsible for reading the request bodies correctly.
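To make the quick hack slightly more defensive, the respond block could wrap the file writing in an error handler, along these lines (an untested sketch; ZnResponse class>>#serverError: is assumed to answer a 500 response):

  onRequestRespond: [ :req |
    [ '/tmp/upload.bin' asFileReference writeStreamDo: [ :out |
         out binary.
         ZnUtils streamFrom: req entity stream to: out ].
      ZnResponse ok: (ZnEntity text: 'done') ]
        on: Error
        do: [ :err | ZnResponse serverError: err messageText ] ];

Note that when reading a streaming body fails halfway, it is safest to close the connection, since the remaining bytes on the socket can no longer be interpreted reliably.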

Also, if you want an upload coming from a form, you will have to parse that form yourself (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), and you will then again pull it all into memory. These things are normally done for you by Zn and/or Seaside.
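For example, with a multipart form the handler would first have to materialize the whole entity and then pick out the uploaded part, something like this (a sketch; the part name 'file' and the #contents accessor on the part are assumptions):

  | part |
  part := req entity partNamed: 'file'.
  '/tmp/upload.bin' asFileReference binaryWriteStreamDo: [ :out |
    out nextPutAll: part contents ].

Here #partNamed: answers a ZnMimePart whose contents are already fully in memory, which is exactly what streaming tries to avoid.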

I also tried with 100 MB: it worked, but it took 10 to 15 minutes. The #streamFrom:to: above uses a 16 KB buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively either. Performance is another issue.
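If the 16 KB buffer turns out to be the bottleneck, the copy loop could be written by hand with a larger buffer, something like this (an untested sketch; the 1 MB buffer size is an arbitrary guess):

  | buffer count |
  buffer := ByteArray new: 1024 * 1024.
  [ (count := req entity stream readInto: buffer startingAt: 1 count: buffer size) > 0 ]
    whileTrue: [ out next: count putAll: buffer startingAt: 1 ].

Whether that actually helps depends on where the time goes (network, VM, disk), so measuring first would be wise.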

That is why I asked what is in the files and what you eventually want to do with them. Is the next processing step in Pharo too?

Maybe all that is needed is giving Pharo more memory. What platform are you on?

Sven

> On 18 Jan 2016, at 13:39, Robert Kuszinger <kuszinger at giscom.hu> wrote:
> 
> Sven, 
> 
> thanks for the comments. I understand all.
> 
> Could you please clarify this:
> 
> "Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory."
> 
> Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside set or design limitation?
> 
> 
> Answering on how it goes:
> 
> 20 - 40 - 10 MB uploads went through in seconds. Now it seems to be stuck on a ~120 MB upload. Pharo memory (Windows OS) seemed to grow to ~441 MB. A "Space is low" warning window appeared. I've clicked on "Proceed" just out of curiosity, but no reaction in the Pharo GUI... hmmm...
> 
> 
> thanks
> Robert
> 
> 
> 
> 
> 2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe <sven at stfx.eu>:
> 
> > On 18 Jan 2016, at 13:17, Robert Kuszinger <kuszinger at giscom.hu> wrote:
> >
> > Sven,
> >
> > thanks for the tip. Actually the UTF problem won't appear with Zinc but I also need to do large file uploads. Zinc resets connection somewhere between 10-19 MB File size. Is there a built-in limitation?
> 
> Yes, one of several limits used by Zn to protect itself from resource abuse (DoS). The default limit is 16 MB.
> 
> > Anyway, an idea on how to break through this limit may also help. I need to upload 150-300 MB files regularly with this service. KomHttp did it out-of-the-box.
> 
> The limit you are running into is called #maximumEntitySize and can be set with ZnServer>>#maximumEntitySize: - the default is 16 MB.
> 
> Given an adaptor instance, you can access the server using #server.
> 
> So together this would be
> 
>   ZnZincServerAdaptor default server maximumEntitySize: 300*1024*1024.
> 
> Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory.
> 
> Let me know how that goes.
> 
> > thanks
> > Robert
> >
> >
> > 2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe <sven at stfx.eu>:
> > Robert,
> >
> > If you are using Pharo, it seems more logical that you would use ZnZincServerAdaptor.
> >
> > But of course, it might not necessarily have to do with the adaptor.
> >
> > HTH,
> >
> > Sven
> >
> > _______________________________________________
> > seaside mailing list
> > seaside at lists.squeakfoundation.org
> > http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
> 


