I am not decided against the generalized null pattern just yet [1]. I just wanted to point out that no one is going to put 5 temps in a row like that. The exception might be more common, but I never see it.

[1] I am not going to say it's bad, nor am I convinced it is a sure win. Personally I can't recall a single time I have needed something like this. I do tend to chain messages often, but I guess I only do that when I don't expect a nil. If one comes up I want to see it right then to track it down. But perhaps in other domains than I have been programming in so far it would come up more. At any rate, I'm glad it's out there to try out if I need it. Thanks for that, Keith.

> From: water@tunes.org
> Date: Thu, 26 Jul 2007 12:41:05 -0700
> To: squeak-dev@lists.squeakfoundation.org
> Subject: Re: Message Eating Null - article
>
> Hello,
>
> I would like to point out that both of these claims are unrealistic.
>
> Specifically, Keith's examples all use low-level control-flow messages
> like "notNil ifTrue: [" whereas non-novice Smalltalkers should be using
> "ifNotNil:" and "ifNotNilDo: [:obj |".
>
> The example would best be:
>
> widget setStringValue: (#(office phone lastNumberDialed asString)
>     inject: person
>     into: [:obj :sel | obj ifNotNil: [obj perform: sel]])
>
> without the message-eating Null pattern. Or, if #perform: irks you, you can use:
>
> widget setStringValue: (person office ifNotNilDo: [:office |
>     office phone ifNotNilDo: [:phone |
>         phone lastNumberDialed ifNotNilDo: [:number | number asString]]])
>
> Like Keith, I find this a problem to have to write and maintain in code,
> but I'm not compelled to write long essays on it. I tend to think that
> chained message-sends represent a way in which distantly-related code can
> reach across protocols a bit too far.
> And Keith is being a bit unfair in that a several-page-long essay (with a
> really wide column justification width) will not be adequately rebutted in
> an email forum, thus keeping the discussion from balance.
>
> My own view is that message-eating null is usable in controlled
> circumstances, but that in general one cannot control the circumstances
> (you can't know who *won't* receive a null as a parameter, so code written
> without that in mind fails in strange ways) in Smalltalk-80.
>
> It is also worth pointing out that "message-eating null" most closely
> resembles the apposition of type T to NIL in Lisp, where one is the
> supertype of every type and the other is the subtype of every type. In
> Lisp, you do not interchange these, for good reasons that I believe apply
> here but don't have time to gather evidence in support.
>
> I think his packages which use message-eating null would be a lot more
> palatable if they didn't...
>
> On Jul 26, 2007, at 12:06 PM, Keith Hodges wrote:
>
>>> Yuck, I wouldn't do either of those. I would do:
>>>
>>> widget setStringValue: (#(office phone lastNumberDialed asString)
>>>     inject: person
>>>     into: [:obj :sel | obj == nil ifTrue: [nil] ifFalse: [obj perform: sel]])
>>
>> you would?
>>
>> To me the above looks as close to Perl as Smalltalk is hopefully ever
>> likely to get!
>>
>> ;-)
>
> That is disingenuous. Please use honest arguments.
>
>> Keith
>>
>> p.s. Someone once accused me of being a PL/1 programmer in a former life.
>
> --
> -Brian
> http://briantrice.com
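Brian's inject:-based guard translates almost directly to Ruby's #inject. A sketch, with hypothetical Person/Office/Phone stand-ins for the objects discussed in the thread:

```ruby
# Hypothetical stand-ins for the person/office/phone chain discussed above.
Phone  = Struct.new(:last_number_dialed)
Office = Struct.new(:phone)
Person = Struct.new(:office)

# The inject:-based nil guard, transliterated to Ruby: each step sends the
# next message only when the previous result is non-nil.
def dig_chain(receiver, *messages)
  messages.inject(receiver) { |obj, msg| obj && obj.public_send(msg) }
end

person = Person.new(Office.new(Phone.new(5551234)))

dig_chain(person, :office, :phone, :last_number_dialed, :to_s)
# => "5551234"
dig_chain(Person.new(nil), :office, :phone, :last_number_dialed, :to_s)
# => nil (the chain stops at the first nil)
```

The `&&` plays the role of `ifNotNil:` here: it short-circuits the fold as soon as a nil appears, without any message-eating Null object.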
I think proper exception handling combined with reflection can produce a very readable result without the flakiness of message-eating nil:
lastNumber := [person office phone lastNumberDialed asString] ifNilShowsUp: ['']
with #ifNilShowsUp: defined as
BlockContext>>ifNilShowsUp: aBlock
	^self
		on: MessageNotUnderstood
		do: [:ex |
			(ex receiver isNil and: [ex signalerContext sender = self])
				ifTrue: [ex return: aBlock value]
				ifFalse: [ex pass]]
Or in other words, the second block is evaluated to produce the final result if one of the messages inside the first block returns nil and that nil doesn't understand the following message. All other failures, including MNUs by nil inside the messages sent from the block, fail "properly".
Cheers,
--Vassili
P.S. "ex signalerContext sender = self" would fail to capture a relevant MNU in some cases. The 100% solution is an exercise for the reader. :)
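A Ruby analog of Vassili's #ifNilShowsUp: (names assumed) shows the same exception-plus-reflection idea: rescue NoMethodError, but only when the failing receiver was nil, and let everything else propagate. Note that, like Vassili's P.S. admits for his version, this sketch is not a 100% solution: it also catches an MNU on nil raised deeper inside a called method, since it has no equivalent of the signalerContext check.

```ruby
# Rescue NoMethodError, but only when the failing receiver was nil;
# any other failure propagates normally.
# (NoMethodError#receiver requires Ruby >= 2.3.)
def if_nil_shows_up(fallback)
  yield
rescue NoMethodError => e
  e.receiver.nil? ? fallback : raise
end

Phone  = Struct.new(:last_number_dialed)
Office = Struct.new(:phone)
Person = Struct.new(:office)

with_phone = Person.new(Office.new(Phone.new(42)))
no_office  = Person.new(nil)

if_nil_shows_up('') { with_phone.office.phone.last_number_dialed.to_s }
# => "42"
if_nil_shows_up('') { no_office.office.phone.last_number_dialed.to_s }
# => "" (nil.phone raised NoMethodError, so the fallback was used)
```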
> I think proper exception handling combined with reflection can produce a very readable result without the flakiness of message-eating nil:
I think that this misses the point.
The power of the message-eating null is in the fact that you have a real object to pass around a system. It is an item that you can use to model things, or more precisely non-things, with.
I used it to model empty slots in a model of telecoms equipment for example, and yes in the right place it can simplify implementation.
Using exception handling around a long calling chain is just not worth the effort.
As for flakiness, a generic message-eating null that does its job could be less flaky than maintaining specific null objects for different domain models.
best regards
Keith
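A minimal message-eating null of the kind Keith describes can be sketched in Ruby (an illustrative sketch with made-up names, not Keith's actual package): every unhandled message returns the Null itself, so a whole chain of sends passes through it silently.

```ruby
# A minimal message-eating Null: any unknown message returns the Null itself.
class Null
  def method_missing(_name, *_args, &_block)
    self
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  def null?
    true
  end

  def to_s
    ''
  end
end

# Modeling an empty slot in a rack of telecoms equipment (hypothetical
# protocol): any query on the slot yields the Null again instead of failing.
empty_slot = Null.new
empty_slot.card.port(3).status        # still the same Null
empty_slot.card.port(3).status.to_s   # => ""
```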
On 7/26/07, Keith Hodges keith_hodges@yahoo.co.uk wrote:
> As for flakiness, a generic message-eating null that does its job could be less flaky than maintaining specific null objects for different domain models.
The problem with message-eating null is that it will happily do *more* than its job. This is what I find objectionable. If you are somewhat familiar with Haskell, a message-eating null is very much like the Maybe monad. But unlike in Haskell, without a type system to contain it, all object references throughout the system effectively become maybe-values, and all implicit continuation calls turn into bind operators.
In other words, a null accidentally leaking outside the area where it was expected can silently cause some actions not to happen. What you deal with in that case is not just a null of unknown origin, which Nevin focused on in his write-up, but things breaking because some code didn't run in the past because of a stray null value. When we write ifTrue:ifFalse:, we expect that each time it runs one of the branches is taken. In a system with message-eating null, this is no longer an invariant.
Perhaps we disagree on what constitutes flaky. Flaky in my book is poor locality of failures with respect to their causes. If I open a door and a window falls out, that's flaky. So is changing a method and having things stop working in an entirely different place because a null value leaked, ended up as the receiver of ifTrue:ifFalse:, and disabled both execution branches. Indeed, maintaining specialized null objects is more work (though more often than not they are part of a hierarchy of classes whose protocols you have to coordinate anyway), but that work in the end produces a program that is more predictable and more likely to break where it's broken. I value that in my programs.
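The failure mode Vassili describes can be sketched in Ruby (hypothetical names throughout): a Null that leaks outside the area where it was expected silently swallows an action, and the breakage only surfaces later, far from its cause.

```ruby
# A message-eating Null for the demonstration.
class Null
  def method_missing(_name, *_args, &_block)
    self
  end
end

handlers = []
config = Null.new                  # leaked: the caller expected a real config
config.register_handler(handlers)  # silently eaten: nothing is registered

# ... much later, in seemingly unrelated code:
handlers.size  # => 0 -- the handler never ran, with no error at the leak site
```

This is exactly the poor locality of failure being discussed: the visible symptom (an empty handler list) appears far from the line where the Null was introduced.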
Cheers,
--Vassili
Nicely said Vassili
Sean Glazier
-----Original Message-----
From: squeak-dev-bounces@lists.squeakfoundation.org [mailto:squeak-dev-bounces@lists.squeakfoundation.org] On Behalf Of Vassili Bykov
Sent: Thursday, July 26, 2007 4:18 PM
To: The general-purpose Squeak developers list
Subject: Re: Message Eating Null - article
Vassili Bykov wrote:
> On 7/26/07, Keith Hodges keith_hodges@yahoo.co.uk wrote:
>> As for flakiness, a generic message-eating null that does its job could be less flaky than maintaining specific null objects for different domain models.
>
> The problem with message-eating null is that it will happily do *more* than its job. [...] When we write ifTrue:ifFalse:, we expect that each time it runs one of the branches is taken. In a system with message-eating null, this is no longer an invariant.
I see your point; however, I don't see that as much different from nil. Have you ever found something not working because a value you expected to be initialized was not, some time in the past?
I am not suggesting indiscriminate use of null. It is useful in some situations.
In actual fact, in versions prior to my latest package releases, null would not have been able to ignore ifTrue:ifFalse: anyway; it would have had to throw an error. Neither can it in other languages such as Ruby, since conditionals are somewhat more wired into the underlying runtime than in Smalltalk.
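Keith's point about Ruby can be checked directly: truthiness is built into the interpreter rather than dispatched as a message, so no method_missing trick lets a message-eating object suppress a conditional branch (sketch with a hypothetical Null):

```ruby
class Null
  def method_missing(_name, *_args, &_block)
    self
  end
end

n = Null.new
# `if` and `?:` never send a message to the receiver: anything other than
# nil and false counts as true, so method_missing is never consulted.
branch = n ? :true_branch : :false_branch
branch  # => :true_branch -- the Null cannot eat the conditional the way
        #    it can eat an ordinary message send
```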
best regards
Keith
Furthermore, do you have any idea where in the past the initialization failed to happen? With a null you can find out where it started.
I've only just noticed this thread. As the original author of the article cited, I would like to make a couple of comments:
1. As stated in the article, a message eating nil has been standard behavior in Objective-C (the current workhorse language on the Mac, and prior to that, NeXT machines). I've *never* seen what I would call a genuine problem with it in this environment.
2. My use of a generalized Null object in Smalltalk (mimicking Objective-C) began as an experiment. The experiment lasted many, many years of general use of the pattern, to see what problems, if any, resulted. My conclusions from the experiment are these:
* First of all, code should be layered-- presentation (GUI) code should be kept in the presentation layer, and the domain code should be kept in the domain layer (where the bulk of the application logic should reside).
* With well-layered code, a generalized Null has *never* created a problem for me in the domain layer, and has *always* simplified the code.
* I have hit issues with a generalized Null used in the presentation layer. I don't, as of now, recall what the issues were, but if my memory hasn't completely failed me, it was because the generalized Null did not integrate well with the existing GUI frameworks already found in all Smalltalk implementations. For example, I seem to remember parts of certain GUI frameworks *expecting* an exception to occur as part of the normal GUI-generating process, and if the exception did not occur, the framework simply broke. I remember frowning at this, because I think exceptions should only be used for exceptional circumstances, not as a designed-in part of the normal processing flow of the GUI. I can't give specifics now, but that is the issue that sticks in my mind.
I'd also like to comment that had the GUI frameworks been created in a message-eating nil (Null) framework from their inception, the issues I hit would not have existed. But in any case, I would *not* recommend the pattern to be used indiscriminately throughout all of your own code, but I also see no reason *not* to use the pattern if the pattern is isolated to your domain layer only.
Also, all of this experience predates any web work, and was pure client-server code and client-server experience.
Nevin
> Also, all of this experience predates any web work, and was pure client-server code and client-server experience.
Also, when you become CEO of a start-up, your perspective changes dramatically. As the sole tech person for bountifulbaby.com, with whatever added functionality has been needed for the company, I have found myself wanting and needing to get it up and running *right now*, regardless of what the code looked like. And this has led me down the hacker path of ugly code, and not even caring if it was ugly.
I know that view can be dangerous. I just wanted to point out that perspectives can change dramatically, depending on your role, as well as whether or not it is your own money you are playing with.
Nevin
Unfortunately nil will understand some messages... In Squeak, nil asString -> 'nil'.
Nicolas
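The same caveat holds in Ruby: nil genuinely understands a number of messages, so any guard built on "sending to nil raises an error" will miss these cases entirely.

```ruby
# Ruby's nil, like Squeak's, responds to several messages without error,
# so an MNU/NoMethodError-based guard will not intercept them:
nil.to_s                # => ""
nil.to_a                # => []
nil.inspect             # => "nil"
nil.respond_to?(:to_s)  # => true
```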
Vassili Bykov wrote: