On Aug 27, 2007, at 1:29 AM, Igor Stasenko wrote:

Cascade is useful. It allows me to write code like this:
ogl
    glTexParameteri: GLTexture2d with: GLTextureMinFilter with: GLLinear;
    glTexParameteri: GLTexture2d with: GLTextureMagFilter with: GLLinear;
    glTexParameteri: GLTexture2d with: GLTextureWrapS with: GLClamp;
    glTexParameteri: GLTexture2d with: GLTextureWrapT with: GLClamp;
    glPixelMapfv: GLPixelMapIToA with: 2 with: (FloatArray with: 0.0 with: 1.0).

But at the same time it stinks, because you don't use the evaluation
result of the previously sent message; you simply drop it. From this
point of view I feel that something is wrong with such a design. Why
does the computer need to waste cycles evaluating the result of a
message when it's simply not used in the end?

For side effects! :-)

In contrast, a pipe does not drop the evaluation result, but reuses it
in the expression that follows. On this point I like it more than the
cascade.
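
(To make the contrast concrete, assuming coll is some collection and |>
is a made-up pipe separator, not standard Smalltalk:

    "Cascade: every send goes back to the SAME receiver; each result is dropped."
    coll add: 1; add: 2; add: 3.

    "Chaining results today means nesting parentheses, read inside-out:"
    ((coll select: [:e | e > 0]) collect: [:e | e * 2]) asSortedCollection.

    "A hypothetical pipe would instead feed each result into the next send:"
    coll select: [:e | e > 0] |> collect: [:e | e * 2] |> asSortedCollection.

Same messages, but each intermediate result flows forward instead of
being nested.)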

Same with the period '.'.
Each time you place a period in your code, you are telling the compiler
to drop the evaluated result of your expression.
Can that be considered good from a computational point of view? Of
course it's not. You are forcing the computer to waste cycles for
nothing.


Same here for side effects.

Same with the implicit return of self from a method. I see good reasons
why I wouldn't want to return anything from a method: it returns nothing
useful or meaningful, and the result is never used to continue
evaluation in other expressions.

And I think that, from a computational point of view, it might be
useful to indicate whether or not we need a method to return a result.
So, all messages chained in cascade expressions could be sent with a
'drop result' flag, and the same goes for the last message before a
period '.'. In other words, we could have two forms of the 'send'
operation (bytecode): send with return and send without return.
I think this could save us from wasting processor cycles for nothing.
I know that such semantics belongs mainly to the compiler, not the
language itself, but I just want to point out how language design
influences the implementation, and how well or badly it can turn out
in the end.
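
(For what it's worth, Squeak's compiler already makes roughly this
distinction at the bytecode level. A sketch you can try, assuming
CompiledMethod>>symbolic is available in your image; the exact output
varies by version:

    "Compile a throwaway method into Object just to inspect its bytecodes."
    Object compile: 'demo
        Transcript show: ''a''; show: ''b''.
        ^self'.
    (Object >> #demo) symbolic

Roughly: push Transcript; dup; push 'a'; send show:; pop; push 'b';
send show:; pop; returnSelf. So a dropped result already costs only a
single pop bytecode rather than a different kind of send, and the
implicit ^self is already its own returnSelf bytecode.)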


This is a very old question about purity and side effects that seems to have reached a certain kind of consensus with monads.
Lots of papers treat this issue. Some suggested effect typing, some continuations; then the Haskell guys figured
out that with type classes they could create a polymorphic monadic bind, and that did the trick.
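
(A toy sketch of such a polymorphic bind in Smalltalk terms, with
ordinary message dispatch standing in for type classes; Just and None
are classes invented here for illustration:

    Just >> bind: aBlock
        "Feed the wrapped value onward; the result is never dropped."
        ^aBlock value: value

    None >> bind: aBlock
        "Short-circuit: once the value is absent, the rest of the chain is skipped."
        ^self

Usage would look like (Just on: 4) bind: [:x | Just on: x * 2],
answering a Just holding 8, with on: an invented constructor.)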

But I think that what you say doesn't apply to Smalltalk, because it's dynamically typed.
How would you know at compile time whether a method has side effects?
What if a method called in a cascade has side effects, but the cascade itself looks pure?
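
(For example, a seemingly pure accessor can mutate its receiver; Thing,
name and computeDefaultName are invented names:

    Thing >> name
        "Looks like a pure getter at every call site, but the first
         send lazily initializes, i.e. mutates, the receiver."
        ^name ifNil: [name := self computeDefaultName]

No call site, and no compile-time analysis without types, can tell this
apart from a pure read.)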

I think it's complex stuff...

Haskell does the "marking" (excuse the name) you talk about with monads,
and anything "not marked" doesn't do any side effects; it's pure.
By adding graph manipulation (reduction and updating) at runtime, they evaluate functions "by name",
and then, by updating the runtime graph, they avoid re-computation.

In Haskell, basically every expression is in a [ ... ] (well, sort of; they use graph manipulation),
and it's sent the message value only if needed. And when the value is returned, the block gets
removed and is updated with the value... so there's no more need to evaluate it if you need it again.
It's a powerful optimization! Too bad that to get it you need to put everything in a [ ... ]
(strictness analysis fixes that a bit).
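
(In Smalltalk terms, that update step is a self-overwriting thunk.
A minimal sketch, where LazyValue is an invented class:

    Object subclass: #LazyValue
        instanceVariableNames: 'block cache'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Lazy-Sketch'.

    LazyValue class >> on: aBlock
        ^self new setBlock: aBlock

    LazyValue >> setBlock: aBlock
        block := aBlock

    LazyValue >> value
        "First send runs the suspended computation and caches the result;
         later sends answer the cache, like updating the graph node in place."
        block isNil ifFalse: [
            cache := block value.
            block := nil].
        ^cache

So (LazyValue on: [self someExpensiveComputation]) computes once on the
first value and just answers the cache afterwards.)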

But the price to pay is high, in my opinion...
Functional code becomes beautiful, but imperative stuff is a bit too hard (not syntactically, thanks to the ton of syntactic sugar).