[Seaside] bad request
otto at finworks.biz
Thu Dec 26 11:05:25 UTC 2013
Thanks for the response.
>> Vulnerability tests that ran against our site showed that URL's with
>> percentage encoded UTF8 characters creates a 500 (internal server
>> error) response, where I think they should actually respond with 404
>> or 405.
> I think in this very specific case 400 (bad request) would be the way to go.
Yes, agree. Not 405.
>> It creates a request (self requestFor: aNativeRequest) without
>> handling exceptions. This means that the outer exception handler
>> catches it and returns a 500.
> So right now FastCGI already turns it into a 500?
Yes, the GS one does.
> There could be all sorts of things that could go wrong causing an
> exception. The http version for example could be "A.B", cookie parsing
> could fail and URL decoding could fail.
Yes, agree. Parsing the UTF-8 encoded characters in the URL is one.
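For illustration (outside Smalltalk), here is a minimal Python sketch of that failure mode: a percent-encoded byte sequence that is not valid UTF-8 raises during decoding, and an exception like this is what currently escapes to the adaptor's 500 handler.

```python
from urllib.parse import unquote

# "%C3%A9" is valid UTF-8 for "e-acute"; "%FF" is not valid UTF-8 at all.
print(unquote("caf%C3%A9", errors="strict"))    # decodes fine: "café"

try:
    unquote("bad%FF", errors="strict")          # invalid UTF-8 byte
except UnicodeDecodeError as e:
    print("decode failed:", e.reason)           # this is the case that
                                                # should become a 400
```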
Looking at RFC 6265, I could not find specific guidance on the
appropriate response when a cookie (or any other header field / part
of the header) cannot be parsed. The one conclusion I did find is that
the server should "degrade gracefully" when receiving invalid cookies.
But that is not necessarily the same situation as being unable to
parse the cookies in the first place. And then a similar question
arises for the parsing of other header fields. Please let me know if
you have better references to read up on.
This is my best conclusion - please comment:
For someone providing a web application, the server implementer can either
1) be strict and respond with 400 (bad request) if it finds something
it dislikes in the request (the http method + URL, the headers, or the
body), or 2) ignore inappropriate parts of the request. Of course,
some parts of the request can't be ignored, so one will have to
distinguish the parts that can safely be skipped from those that cannot.
> Should the response generated in the exception case be a WAResponse or
> a "native" response? WAResponse response may be a bit tricky because
> you don't have WARequestContext because you don't have WARequest. So
> it's probably going to be "native" response, this also avoids the
> other issue of what you're going to do if response conversion
> (#responseFrom:) throws some kind of exception.
> So in the end all what you do is use a 400 instead of a 500 for some
> exception classes.
Ok, I understand some of the predicament. But I don't understand how I
would use 400 and not 500 in these cases.
A "native" request would be something that Comanche / Zinc / FastCGI
would create and pass to Seaside (i.e. WAServerAdaptor >> process:).
For the error cases that the Seaside framework does not handle, the
web server / FastCGI server implements a fallback handler for server
errors (500 Internal Server Error), e.g. HttpService >>
handleDispatchErrorsIn:. This is the error handler invoked in the case
under discussion, where the URL contains percent-encoded parts that
break the decoding.
It seems that if I want to handle any of these errors consistently, I
have to modify the Comanche, Zinc and FastCGI adaptors.
I think we should allow the Seaside framework to handle these errors
as well. I understand that giving WAResponse the responsibility to
handle it can be tricky, but it may just be the right place.
Currently, it does handle some of the 400 range of errors already.
This is my suggestion: create the WARequestContext on the native
request without parsing it.
Parsing the URL then effectively comes down to handleFiltered: on
WADispatcher, which will try to get the URL via the consumer on the
request. If parsing breaks, the request is marked as a bad request and
the dispatcher can pass it to the default handler. Parsing errors in
the other parts of the request would then go to the handler being
dispatched to, which could in turn dispatch to an error handler.
This gives the user of the Seaside framework the flexibility of
handling these errors elegantly as well. Only errors that break the
Seaside framework itself (the ones that somehow fall through
WAExceptionHandler >> #handleExceptionsDuring:) would then trigger the
Comanche / Zinc / FastCGI handler.
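The lazy-parsing suggestion could look roughly like this (a Python sketch of the idea only; the class and method names are stand-ins, not Seaside code): the context wraps the native request untouched, decodes the URL only on first access, and records a bad-request flag instead of letting the exception escape to the adaptor.

```python
from urllib.parse import unquote

class RequestContext:
    """Sketch of the proposal: hold the native request unparsed and
    decode lazily, flagging failures instead of raising to the adaptor."""

    def __init__(self, raw_path):
        self._raw_path = raw_path   # stand-in for the native request
        self._path = None
        self.bad_request = False

    def path(self):
        if self._path is None and not self.bad_request:
            try:
                self._path = unquote(self._raw_path, errors="strict")
            except UnicodeDecodeError:
                self.bad_request = True   # dispatcher can now answer 400
        return self._path

def dispatch(ctx):
    # stand-in for the dispatcher routing a flagged request to a 400 response
    ctx.path()
    return 400 if ctx.bad_request else 200

print(dispatch(RequestContext("/ok%20path")))   # 200
print(dispatch(RequestContext("/bad%FF")))      # 400
```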