[Seaside] parse content

Eduard Poschinger elofachant at yahoo.de
Sun Feb 6 12:32:22 UTC 2011


That would be the second step. 

First, I would like to access it.
Something like ... getContentsOfUrl: anUrl, which returns a Stream or a String (the whole HTML).

Does such a method exist?
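
Just to make the shape concrete, here is a minimal sketch of the kind of call I have in mind (the class names are assumptions about what is loaded in the image: HTTPSocket ships with Squeak, ZnEasy comes from the Zinc HTTP Components package for Pharo):

	"Squeak: HTTPSocket answers a stream over the response body"
	| html |
	html := (HTTPSocket httpGet: 'http://www.seaside.st') contents.

	"Pharo with Zinc loaded: ZnEasy answers a response object"
	html := (ZnEasy get: 'http://www.seaside.st') contents.

	"html is now a String; wrap it if a Stream is wanted"
	ReadStream on: html.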

--- Sebastian Sastre <sebastian at flowingconcept.com> wrote on Sun, Feb 6, 2011:

From: Sebastian Sastre <sebastian at flowingconcept.com>
Subject: Re: [Seaside] parse content
To: "Seaside - general discussion" <seaside at lists.squeakfoundation.org>
Date: Sunday, February 6, 2011, 11:17 AM

If you want it parsed, take a look at the HTML & CSS parser on squeaksource.com.
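
(I don't know that package's exact API off-hand, so only as a hedged sketch: the snippet below uses XMLHTMLParser from Pharo's XML-Parser-HTML instead, which is a different package, and every class and selector name in it is an assumption about what is loaded in your image.)

	| html document |
	html := (ZnEasy get: 'http://www.seaside.st') contents.	"page source as a String"
	document := XMLHTMLParser parse: html.	"parse the (possibly sloppy) HTML into a DOM"
	(document allElementsNamed: 'a')
		collect: [:each | each attributeAt: 'href']	"for example, collect all link targets"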



On Feb 6, 2011, at 7:38 AM, Eduard Poschinger wrote:
Hello all,


I'm looking for a way to get the contents of a site: the entire HTML as a Stream, without opening the site in a browser.


Many thanks and greetings

Eduard

_______________________________________________
seaside mailing list
seaside at lists.squeakfoundation.org
http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside



