Polynomial Division Challenge

mmille10 at comcast.net
Fri Jul 13 21:25:57 UTC 2007


nicolas cellier wrote:
"Other things make Smalltalk a bad candidate for such silly games.
1) Base library names are long: having something like +, &, or < 
instead of aStream nextPutAll: would shorten things a lot.
2) Class creation methods are lengthy...
3) Categories are useless for this kind of test and spoil byte-efficiency...

I did not fulfill the stdin/stdout requirement. This is a must-add in 
Squeak, VW, and other Smalltalks.

I feel like the best candidate would be GNU Smalltalk to compete under 
these Ruby/Python/PHP/Perl-oriented rules. Besides, it could be automated 
on their site.

Of course, as Bert already said, the goal is stupid from the code 
quality point of view. It leads to ugly, unreadable code. Not the 
Smalltalk spirit. But if you want to play, the repository is open for 
write..."

Hmm! I see what you're saying. You make a really good argument against Code Golf. After I read your note I thought to myself, "Well, it was based on Perl Golf, for crying out loud." I felt kind of dumb for not seeing that as a red flag. I paid a little attention to the metrics that were used, like "keystrokes", but I didn't fully consider their implications. You're right. It just encourages ugly code. When I looked at the stats of the "top 10" winners in the Polynomial Division problem, I noticed that the Ruby and Perl versions were winning out over the Python versions. I've read that Python programmers tend to be more concerned with a "correct way to do things", which puts them at a disadvantage under scoring like this. You're right that things like the length of method names can also put a program at an advantage or disadvantage in the rankings.
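For context, the task behind the challenge is ordinary polynomial long division. A straightforward (deliberately un-golfed) sketch in Python, assuming coefficient lists with the highest-degree term first — the contest's actual I/O format and scoring are not reproduced here:

```python
def poly_divmod(num, den):
    """Divide polynomial num by den, both given as coefficient lists
    with the highest-degree coefficient first.
    Returns (quotient, remainder) as coefficient lists."""
    num = list(num)  # working copy; will hold the running remainder
    quot = []
    for i in range(len(num) - len(den) + 1):
        coeff = num[i] / den[0]        # ratio of leading terms
        quot.append(coeff)
        for j, d in enumerate(den):    # subtract coeff * den, shifted by i
            num[i + j] -= coeff * d
    return quot, num[len(quot):]       # leftover low-order terms = remainder

# (x^2 + 3x + 2) / (x + 1)  ->  quotient x + 2, remainder 0
q, r = poly_divmod([1, 3, 2], [1, 1])
```

A golfed entry would compress exactly this loop into as few keystrokes as possible, which is where terse operators beat long selector names like nextPutAll:.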

Oh well. Sorry I suggested it.

As I was responding here, I was trying to imagine a programming contest that measured what is truly valuable about programming. It seems to me that one would have to have human judges, not a machine doing objective measurements, and they would need to measure more than "does the program work?" and "does it run in a reasonable amount of time?" (which is important); they would also have to look at best practices, like how well the program implements encapsulation, how concise it is, and how good its abstractions are. But then they'd be rather subjective, wouldn't they? They'd be like figure skating competitions. Those try to keep scoring as objective as possible, but you can't help the subjective judgements that inevitably enter into it.

The last time I participated in a programming contest was many years ago. The ACM held them every year. They were time trials, and really all they did was run test input through the programs and check whether they produced correct output. The source code could have been complete gibberish as far as they were concerned. I always liked the challenge (even though I consistently scored low by their criteria), but I didn't like the emphasis on "get it done as fast as possible".

---Mark
mmille10 at comcast.net



More information about the Squeak-dev mailing list