Hi Kim,
I _am_ well and still very much interested, even if my work has led in a very different direction. I fully understand the community's need for some kind of assessment system; I seem to remember that something of the sort was a topic of conversation at the very first Squeakfest. I am glad to see that Tim has modeled the ranking/trust system on some kind of peer review. Nevertheless, I am still wary of whether it will work, or work well; peer review is problematic even in the sciences.

Some kind of "expert" assessment or assessment tool (a rubric or checklist that asks: does it work? does it provide instructions? and so on) might be better suited to your needs. Projects could then be listed in order of how many, or which, rubric items have been checked. I don't know who, if anyone, among the so-called experts has the time to devote to assessments, but I also don't know who, if anyone, among the student contributors will devote that time to peer review.

While a chunk of my life has been devoted to solving problems with technologies, I have grown both wary and weary of technological solutions. I hope this solution will be thoroughly tested, not just to see whether it functions but to see whether it actually solves a human problem without introducing new problems.
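To make the rubric idea above concrete, here is a minimal sketch of the kind of listing I mean. Everything in it is invented for illustration (the rubric items, the project names, who checked what); it is not taken from any existing Squeakland tool, just one plausible way to order projects by how many checklist items an assessor has ticked off.

# Hypothetical rubric; the items are examples, not an agreed-upon list.
RUBRIC = [
    "runs without errors",
    "has an About flap or introduction",
    "states the goal of the project",
    "includes instructions on where to start",
    "illustrates a concept or powerful idea",
]

def rubric_score(checked):
    """Count how many recognized rubric items an assessor checked off."""
    return sum(1 for item in checked if item in RUBRIC)

def list_by_rubric(projects):
    """Order projects so the most completely assessed ones come first."""
    return sorted(projects, key=lambda p: rubric_score(p["checked"]), reverse=True)

if __name__ == "__main__":
    # Made-up uploads, each with the items an assessor checked for it.
    projects = [
        {"name": "car race",    "checked": ["runs without errors"]},
        {"name": "gravity sim", "checked": ["runs without errors",
                                            "has an About flap or introduction",
                                            "includes instructions on where to start"]},
        {"name": "lone sketch", "checked": []},
    ]
    for p in list_by_rubric(projects):
        print(rubric_score(p["checked"]), p["name"])

The point is only that the ordering would come from explicit, checkable criteria rather than from popularity votes.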
Best of luck,
John
On Wed, Sep 30, 2009 at 11:20 AM, Kim Rose kim.rose@vpri.org wrote:
Hey, John,
Nice to see you are still around and interested -- hope you are doing well. Thanks for the comments.
As for me and Viewpoints -- there is only one reason for ranking, and that is filtering: an attempt to bring the best educational examples, and those illustrative of Etoys' strengths, to the attention of the community. So I, for one, am hoping that those "voting" are not voting for any particular child, author, or person, but for the example itself -- I hope those voting/ranking are asking, "is this a fine exemplar to help teach a concept, principle, or powerful idea?" The more the example answers back with a "yes," the higher the rank. Also, the more complete an example, the higher the rank. How clearly does the example disclose what the Etoy is meant to do? Does it have an explanation of what it is? If it is a game, does it include instructions on how to play and state the goal of the game?
I hope that teachers who encourage their students to upload projects will *only* allow sharing if their student provides an "About" flap, or some intro/explanation of the Etoy, some instruction on where to start, the aim of the project, etc. The most beautiful simulation of something will fall completely flat if the viewer has no real idea of what they are looking at or what the script they are playing with is meant to do. Etoys are intended to teach and help us learn; learning cannot occur without context. No author should assume their project will be self-disclosing; it will not. Our users have a variety of levels of expertise in both Etoys and subject-matter areas; providing more context can only help.
If there is no "ranking" in place and everything is posted the offerings soon get overwhelming and most difficult to navigate. We found this years ago when we had an active "super-swiki". Projects that were no more than a single sketch with no script at all were posted and only "muddied the waters" for those wading in an attempt to find something with some meaningful content.
I hope the community will bear this in mind when "ranking" what gets uploaded and understand that not everyone can be represented in a featured showcase. -- Kim
On Sep 30, 2009, at 7:58 AM, voiklis wrote:
Hello all,
I apologize for joining this conversation so late; I am a longtime, though recently quiet, member of the community.
I understand that the ranking system is an attempt to enable people to judge the quality of a contribution (or contributor) based on some directly observable measure of reputation. At first glance, one could argue that such a system overcomes a problem of the face-to-face world, where reputation is not directly observable. In order to get an honest assessment of a person's reputation, one has to invest a lot of time building trust with the people familiar with that person. Even then, a trustworthy assessment requires direct observation of the person's actions.
A ranking system would appear to reduce that effort by half; knowing the person's reputation among his/her peers, one only needs to assess the work.
The problem with that reasoning is that electronic ranking systems are highly susceptible to manipulation. Building reputation becomes the goal of the activity for many people, and they use all sorts of seemingly harmless social and technological means to inflate their numbers. Our lab has been studying this phenomenon through both observational studies of online communities and laboratory experiments. The two papers below report on the phenomenon as it presents itself in Digg, the news aggregation site.
The first paper demonstrates that a tit-for-tat game of reciprocity inflates the reputation of contributors and their contributions without reflecting anything substantive about the work itself. The second paper really brings out the negative consequences of this phenomenon. It reports on an experiment in which people judged how interesting they found each contribution. The ranking values of the articles were set by the investigator; sometimes the rank of an article was set high, at other times low. Experimental subjects rated higher-ranked contributions as more interesting than lower-ranked contributions. The same article was rated as highly interesting when its rank was set high and uninteresting when ranked low. Duncan Watts (of small-world networks fame) observed the same phenomenon with music ratings.
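To show the first finding in miniature, here is a toy simulation. None of the numbers come from the Digg data or from either paper; the "voting ring" and all parameters are made up purely to illustrate how a few reciprocators can push mediocre contributions above better ones.

import random

random.seed(0)

N_PROJECTS = 30
RING = {0, 1, 2, 3}        # hypothetical contributors who always up-vote each other
N_HONEST_VOTERS = 40

# Intrinsic quality of each project; the ring's projects are deliberately mediocre.
quality = [0.3 if i in RING else random.uniform(0.5, 1.0) for i in range(N_PROJECTS)]
votes = [0] * N_PROJECTS

# Honest voters sample projects and up-vote roughly in proportion to quality.
for _ in range(N_HONEST_VOTERS):
    for p in range(N_PROJECTS):
        if random.random() < 0.2 * quality[p]:
            votes[p] += 1

# Ring members reciprocate: every day for a month, each up-votes the others.
for _ in range(30):
    for member in RING:
        for other in RING - {member}:
            votes[other] += 1

ranking = sorted(range(N_PROJECTS), key=lambda p: votes[p], reverse=True)
print("top five by votes (project, votes, quality):")
for p in ranking[:5]:
    print(p, votes[p], round(quality[p], 2))

Run it and the ring members' projects top the list even though every other project is of higher quality; the vote totals record the ring's diligence at reciprocating, not anything about the work.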
What this means in the present discussion is that people will likely ignore low-ranking contributions. Worse still, when they do actually look at those contributions, they are likely to see what the ranking value led them to expect rather than the qualities of the contribution itself.
Unless we can find scientific research that demonstrates any benefits to ranking, I think we should be wary of using such systems.
All best,
John
Sadlon, E., Sakamoto, Y., Dever, H. J., & Nickerson, J. V. (2008). The karma of Digg: Reciprocity in online social networks. In R. Gopal and R. Ramesh (Eds.), Proceedings of the 18th Annual Workshop on Information Technologies and Systems. http://cog.mgnt.stevens-tech.edu/~yasu/papers/reciprocity.pdf
Sakamoto, Y., Ma, J., & Nickerson, J. V. (2009). 2377 people like this article: The influence of others' decisions on yours. In N. Taatgen, H. van Rijn, L. Schomaker, and J. Nerbonne (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society.http://cog.mgnt.stevens-tech.edu/~yasu/papers/cogscidigg1.pdf
Salganik, M. J., Dodds, P. S., and Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762):854-856. http://dx.doi.org/10.1126/science.1121066%5B/url]