[Seaside] Re: Seaside and REST

Klaus D. Witzel klaus.witzel at cobss.com
Thu Mar 29 21:03:26 UTC 2007

Hi Andreas,

on Thu, 29 Mar 2007 19:38:49 +0200, you wrote:
> Hi Klaus -
> Klaus D. Witzel wrote:
>> Besides of that, the counter in the Seaside counter example is *not*  
>> stored (as would be suggested by POST) but it is incremented. Doing  
>> incremental changes to a living object is not addressed by any HTTP  
>> request method ;-) For example, all WebDAV resources and Web2Mail  
>> scripts are considered to be dead (in the sense of a stateless, always  
>> repeatable request+response scenario).
> Which indeed they are. But my point about the counter example of  
> course went a little deeper. I can see the counter as stateless merely  
> by assuming that we have a platonic space of integer numbers where  
> plus and minus are retrievals ;-) However, that gets really hard when  
> we get to persistent state that can't be undone easily.

NP, undo is only a local phenomenon and has nothing to do with HTTP </grin>

- http://www.google.com/search?q=undo+undone+site:www.w3.org
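To make the counter distinction concrete, here is a minimal sketch in plain Python (not Seaside code) of the same resource handled two ways: a safe, repeatable retrieval versus a state-changing increment:

```python
# Minimal sketch (plain Python, not Seaside) contrasting a "safe"
# retrieval with a state-changing request on the same counter.

class Counter:
    def __init__(self):
        self.value = 0

    def handle_get(self):
        # Safe and idempotent: repeating it changes nothing.
        return self.value

    def handle_post(self):
        # Unsafe: every repetition leaves the resource in a new state.
        self.value += 1
        return self.value

c = Counter()
assert c.handle_get() == c.handle_get() == 0   # repeatable retrieval
c.handle_post()
c.handle_post()
assert c.handle_get() == 2                     # the state has moved
```

The increment is exactly the "living object" case: no sequence of retrievals reproduces it.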

> Or when one deals with files directly. Does Seaside have "special rules"  
> for making such modifications or are all of these presented as GET  
> nevertheless?

Dunno. Seasiders?

>> Another illustrating use case is HTTP-tunneling. Which method SHOULD  
>> they NOT use, POST or GET? IIRC they use both, and the choice depends  
>> on which method allows *huge* amounts of bytes transferred upstream  
>> (POST) and which does not (GET).
>>  [this is just from experience, no offense intended.]
> None taken. That is a great question. What is "best practice" these  
> days? Just use whatever you feel like? Whatever works most efficiently?

There are only two cases: crawlers (and bookmarked URLs, as already  
mentioned by others; both have the same "don't put state in me"  
requirement) versus upstream request method capabilities and  
requirements - ignoring WebDAV (as usual :)

FWIW, upstream request method capabilities may also be constrained by the  
HTML author's taste and abilities, for example when using forms and their  
elements (this can be circumvented by XMLHttpRequest-ing).
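As an illustration of that circumvention, a small Python sketch standing in for a browser's XMLHttpRequest (the URL is hypothetical and the request is only built, never sent), issuing a method that no plain HTML form can emit:

```python
import urllib.request

# HTML forms only speak GET and POST; script-driven requests
# (XMLHttpRequest in a browser, urllib here) are not so limited.
# Hypothetical URL; the request object is built but never sent.
req = urllib.request.Request(
    "http://example.com/counter",
    data=b"value=0",
    method="PUT",          # a method no plain <form> can produce
)

assert req.get_method() == "PUT"
assert req.data == b"value=0"
```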

And because of the many ( :) buffer overflow vulnerabilities in web  
"servers" during request processing, GET requests are nowadays very  
limited in size [also: beware of the proxying middleman]. Needless to  
say, these restrictions bring GET closer to what the W3C wants us to do  
with GET and all the files on our web server ;-)
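A rough sketch of that size asymmetry (hypothetical URL, nothing is sent): a GET must carry its payload in the URL itself, which servers and proxies cap, while a POST carries it in the entity body and leaves the URL short:

```python
from urllib.parse import urlencode

# Why bulk data goes upstream via POST: a GET has to put everything
# into the URL, which servers and proxies limit in length, while a
# POST carries it in the entity body. (Hypothetical URL.)
payload = urlencode({"data": "x" * 10_000})

get_url = "http://example.com/upload?" + payload   # URL grows with the data
post_url = "http://example.com/upload"             # URL stays short
post_body = payload.encode("ascii")                # data rides in the body

assert len(get_url) > 10_000
assert len(post_url) < 30
```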

>>> "These methods ought to be considered "safe". This allows user agents  
>>> to represent other methods, such as POST, PUT and DELETE, in a special  
>>> way, so that the user is made aware of the fact that a possibly unsafe  
>>> action is being requested."
>>  Then, how would you access a constantly changing "document" resource  
>> like a Croquet application running elsewhere, with HTTP? HTTP request  
>> methods were invented with "a resource is a file and the version and  
>> quality of the file's content can be negotiated by an HTTP method" in  
>> mind, IMO.
> True, but for example, in Croquet we have that distinction constantly.  
> We have, for example, a method #get: in code which (surprise!) is a pure  
> retrieval, idempotent and safe ;-) And then we have future messages  
> (think: POST) which modify the resource (object). And it makes perfect  
> sense both conceptually as well as from a practical point of view.
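A loose Python analogy of that Croquet distinction (not Croquet's actual API; the names here are made up) with a pure retrieval next to a queued, state-changing "future" message:

```python
# Loose analogy only (not Croquet code): a pure retrieval next to a
# queued "future" message that mutates the object when applied.

class Replica:
    def __init__(self):
        self.state = 0
        self.pending = []      # stand-in for a future-message queue

    def get(self):
        # Idempotent, safe: think HTTP GET / Croquet's #get:.
        return self.state

    def future(self, delta):
        # Queued mutation: think HTTP POST / a Croquet future message.
        self.pending.append(delta)

    def step(self):
        # Apply queued messages, as a scheduler eventually would.
        while self.pending:
            self.state += self.pending.pop(0)

r = Replica()
r.future(5)
assert r.get() == 0    # retrieval alone never changes the state
r.step()
assert r.get() == 5    # the queued mutation has been applied
```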

I knew I was talking to an expert, Andreas :)


> Cheers,
>    - Andreas
