We should ask why people want to teach Python instead of Smalltalk. Why do people veer away from Smalltalk with its add-ons like Etoys and Scratch, and from other paradigms like Patterns and CRC cards, which aren't as good for commercial programming and thus really aren't as good to teach children? What can be done to remodel Squeak to provide all the features the more commercially popular languages have?
Earlier there was a post saying a boss didn't want a GUI where a combination of buttons would bring up all sorts of things his employees shouldn't be playing with. So put a cleaner, commercial-grade GUI on the list. Maybe the preferences switch could live in its own file, or as the first character in Sources, to reduce the file count. The Changes file shouldn't be needed in a deployed application. Is there any way to cut the deployment image down to one file containing both the Sources and the VM, like an .exe in any other language?
I've written about the need to fix garbage collection control so it can be turned off, as Python allows, to enable Squeak to be used for real-time projects like self-driving cars, where a 100ms delay can veer the car 8 feet off course, fully into a lane of oncoming traffic.
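For concreteness, the Python control I mean is the standard gc module, which lets a program fence off a time-critical section so no collection pause lands inside it. A minimal sketch, with the control step left as a placeholder:

    import gc

    def control_step():
        pass  # stand-in for the real sensor read / steering update

    gc.disable()               # suspend Python's cyclic garbage collector
    try:
        for _ in range(1000):  # time-critical control loop: no GC pauses here
            control_step()
    finally:
        gc.enable()            # resume collection once it's safe
        gc.collect()           # and take the pause now, on our own schedule

(Note that gc.disable only stops the cycle collector; reference counting still frees most objects immediately.)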
Recently I learned from a UC Berkeley website that it takes 100ms to recognize the objects in a picture, too. Does that mean the future will have a cloud in every car, with Squeak needing to conduct image analysis across hundreds of cooperating cores to get safe real-time performance?
For all its benefits, the state of Squeak seems like a collection of law statutes: a big body of text contributed by years of legislation, which nobody can remember all of and some of which makes little sense. Maybe a major rewrite starting from zero would help?
The GUI, while it has many nice features, somehow seems to lack the crisp precision, ease, and speed of commercial software like Solidworks. I like how Squeak comes up ready to go far quicker than, say, Amazon's Audible application, but Squeak graphics aren't as fast or as easy to program as Solidworks.
Recently I saw a couple of short videos on two moderate-size robots whose users extolled their ease of programming. Perhaps Smalltalk needs a new top-level rule-based language to improve programmer efficiency. I'm working on this one. And since my prototype was so easy, it angers me to think of all the time I spent being both ignorant and afraid after seeing various compiler books like the "Dragon Book" intentionally make compiler writing a difficult graduate-level course instead of an easy advanced-beginner assignment.
But there's one thing I have in common with my Raspberry Pi: when utilization is maxed for too long, we overheat and shut down. I can write simple stuff like this when it's too hot to do real work. But even multiple cores get too hot when they are maxed out, so a real-time computer needs heat control, or cooling overkill, in case a vital complex situation clogs the bandwidth. Well, pray about it.
On 5 Jul 2015 at 16:22, Kirk Fraser wrote:
> The state of Squeak for all its benefits seems like a collection of law statutes, a big set of text contributed by years of legislation that nobody can remember all of and some of which makes little sense. Maybe a major rewrite starting from zero would help?
" like a collection of law statutes" is a good analogy. Cuis seems like a major rewrite of Squeak and is simpler, easier to understand. What do you think of Cuis?
- Dan
I used Cuis at first to display hand-written G-codes in graphic form for a printed circuit board. I kept up with Cuis through a few versions and found a couple of bugs for Juan. Eventually Casey advised going to Squeak, so I did. Perhaps my requests were getting annoying.
I'm mostly interested in using a multi-core Squeak with GC control for my robot. Tim says a multi-core VM is coming for the new Pi. He hasn't answered on GC control. With multi-core, a user need not see GC control, but the system should provide 100% GC-free service even if, behind the scenes, it momentarily toggles one GC off and lets the other complete.
With real-time driving, which I hope my robot will do some day, getting rid of all 100ms delays is vital.
On Sun, 2015-07-05 at 17:12 -0700, Kirk Fraser wrote:
> Eventually Casey advised going to Squeak so I did. Perhaps my requests were getting annoying.
Perhaps you misinterpreted what Casey said? Definitely have all the options (Squeak, Pharo, Cuis, etc.) as part of your toolkit. Squeak in particular has a very active mailing list, and you'll find a lot of existing code to play with. I personally do most of my development in Cuis, some in Pharo (for things like Seaside that don't yet exist in Cuis), and a bit still in Squeak. They all have their place depending on your needs. Given your emphasis on performance, I would think Cuis is going to be the place where you can maximize it. (All the above Smalltalk variants use essentially the same core VM; it's the plugins and images that really differ.)
> Tim says a multi-core VM is coming for the new Pi. He hasn't answered on GC control.
Are you *sure* that's what Tim said? I see a thread where he's talking about *build* performance (i.e. compiling the C code for the VM) on a quad-core, with the caveat 'even if Squeak can't directly take advantage' (i.e. no multi-core VM).
> With real time driving, which I hope my robot will do some day, getting rid of all 100ms delays is vital.
The trick to getting rid of long delays is more a function of preallocating everything you can than of getting rid of GCs. (I've done some highly interactive stuff in GC environments; preventing GCs entirely is impractical except over short periods of time, but minimizing their frequency and duration is very doable.) One thing I think I recently saw that should help you in this regard is FFI memory pinning, if you're calling out to external code.
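To illustrate the preallocation idea, here's a toy sketch in Python (not from any real project): allocate the buffers once at startup and recycle them, so the steady-state loop creates no garbage at all:

    # A fixed pool of buffers allocated up front; the hot loop acquires
    # and releases them instead of creating new objects.
    POOL_SIZE = 64
    FRAME_BYTES = 4096
    free = [bytearray(FRAME_BYTES) for _ in range(POOL_SIZE)]

    def acquire():
        return free.pop()      # reuse an existing buffer, no allocation

    def release(buf):
        free.append(buf)       # hand it back; nothing becomes garbage

    buf = acquire()
    # ... fill buf with sensor data and process it in place ...
    release(buf)

The same pattern carries over to Smalltalk: keep a collection of pre-made objects and recycle them instead of letting them die young.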
> > Tim says a multi-core VM is coming for the new Pi.
> Are you *sure* that's what Tim said?
Of course, my over-hopeful misinterpretation is possible.
"Squeak runs quite well on a Pi, especially a pi2 - and we're working on the Cog dynamic translation VM right now, which should with luck triple typical performance." - timrowledge » Thu Feb 19, 2015 https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=100804&p=698...
> The trick to getting rid of long delays is more a function of preallocating everything you can than getting rid of GC's...
Thanks. Maybe when I find, make, or build a better place to work, I'll be able to tackle some of that. I wouldn't be surprised if a VM is as easy as a compiler once one actually starts working on it.
Here is another possibility.
Take a look at Symbolic Sound, a company that makes a system called Kyma. http://kyma.symbolicsound.com/
This company has been around for over twenty years. Its product has always been the fastest music synthesis system in the world that gives you total control over your sound. And by "total", I mean it gives you the ability to mathematically specify each sound wave if you want, which is actually too much detail for most people. And it is all written in Smalltalk. Not Squeak, of course, since Squeak wasn't around then. But it could have been done in Squeak, and perhaps they ported it to Squeak. I haven't talked to them for a long time so I don't know what they did, but from the screen shots I think it is still a very old version of VisualWorks.
Anyway, how do they make it so fast? How can they make something that can be used for hours without any GC pauses?
The trick is that the sound is produced on an attached DSP. The GUI is in Smalltalk on a PC, and it generates code for the DSP. It is non-trivial to make the compiler so fast that when you press "play", it can immediately start up the DSP and start producing sound. It does this (rather, it did this, since they might have changed the way it works) by producing just enough code to run the DSP for a few seconds, then starting the DSP while it generates the rest of the code. Kyma is literally writing the program into DSP memory at the same time as the DSP is running the program, producing sound.
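In outline the trick is just a producer/consumer handoff: compile the first chunk, start the device, and keep compiling while it runs. A rough sketch (illustrative only - the stand-in functions are made up, and the real Kyma compiler and DSP interface are far more involved):

    import queue, threading, time

    def compile_section(section):
        time.sleep(0.1)              # stand-in for slow code generation
        return "code for %s" % section

    def run_on_dsp(code):
        time.sleep(0.1)              # stand-in for the device running one chunk

    chunks = queue.Queue(maxsize=4)  # stay only a few chunks ahead of playback

    def generate(program):
        for section in program:
            chunks.put(compile_section(section))
        chunks.put(None)             # end-of-program marker

    threading.Thread(target=generate, args=(range(50),), daemon=True).start()

    code = chunks.get()              # playback starts after the FIRST chunk,
    while code is not None:          # not after the whole program compiles
        run_on_dsp(code)
        code = chunks.get()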
Anyway, maybe that is the right approach to programming robots. You don't even need to use two computers. Imagine you had two computers, one running Squeak and the other a simple real-time machine designed for controlling robots, but not very sophisticated. Squeak programs the simple computer and can change its program dynamically. The simple computer has no GC. Since Squeak is a VM on a computer, the real-time computer can be a VM too. So you could be running them both on your PC, or you could run them on two separate computers for better performance.
I would be happy to talk more about this. But I'd like to talk about the beginning of Kyma. The owners of Symbolic Sound are Carla Scaletti and Kurt Hebel. Carla has a PhD in music, and Kurt in Electrical Engineering. I met Carla after she had her PhD. She wanted to get a MS in computer science so she could prove her computer music expertise, and she ended up getting it with me. She took my course on OOP&D that used Smalltalk. For her class project (back in 1987, I think) she wrote a Smalltalk program that ran on the Mac and that produced about ten seconds of sound, but it took several minutes to do it. Hardly real time. However, she was used to using a supercomputer (a Cray?) to generate sounds that still weren't real time, so she was very pleased that she could do it on the Mac at all, and though Smalltalk was slower than Fortran, in her opinion the ease of use was so great that she didn't mind the speed difference. As she put it, the speed difference between a Mac and a Cray was bigger than between Smalltalk and Fortran. She ended up turning this into the first version of Kyma and that became the subject of her MS thesis. I can remember when she showed it in class. She was the only woman in the class, and the other students knew she was a musician, i.e. not *really* a programmer. She was quiet during class, so they had not had a chance to have their prejudices remedied. Her demo at the end of the semester blew them away.
Kurt had built a DSP that their lab used. (The lab was part of the PLATO project, I believe, one of the huge number of creative results of that very significant project at Illinois.) It was called the Capybara. This was before the time when you could just buy a good DSP on a chip, but that time came very soon, and then they used the commercial chips. For her MS, she converted her system to use the Capybara, and this was when she figured out how to make it start making music within a fraction of a second of pressing the "play" button. Kurt also used Smalltalk with the Capybara. His PhD was about automatically designing digital filters, and his software also generated code for the Capybara, though it was actually quite different from Kyma.
The two of them worked on several different projects over the next few years, but kept improving Kyma. Along the way Kurt started building boards that had several commercial DSPs on them. Eventually they decided to go commercial and started Symbolic Sound.
-Ralph Johnson
Ralph Johnson,
That's an excellent suggestion and an excellent story, thank you very much! Letting the human interface in Smalltalk program the robot controller instead of being the robot controller sounds good.
My robot uses a network of Parallax microcontroller chips to drive hydraulic valves. They can be programmed via USB for simple tasks like moving one joint from point A to B, but since each controller has 8 cores, more complex tasks like grasping or walking can be done on the MCUs, or on a small Raspberry Pi or other hardware, in a non-GC or controllable-GC language.
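From the desktop side, a joint command like that can be a single line of text down the USB serial link. A sketch using pyserial (the port name and the MOVE command syntax are made-up placeholders, not the real Parallax protocol):

    import serial  # pyserial

    link = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)

    def move_joint(joint, position):
        # One newline-terminated command per request, e.g. "MOVE 3 1500"
        link.write(("MOVE %d %d\n" % (joint, position)).encode('ascii'))
        return link.readline()   # wait for the controller's acknowledgement

    move_joint(3, 1500)          # move joint 3 from point A toward point B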
A harder part to wrap my head around is handling the webcam vision system and artificial intelligence while remaining time-sensitive enough to do time-critical tasks like cartwheels and other acrobatic choreography.
I know my human mind in effect shuts down most of its intellectual pursuits when engaged in heavy physical activity - maybe the robot must do the same: think more creatively when idling and pay closer attention while working. That takes care of the AI timing.
The heavy load of vision processing appears to need a mini-cloud of cores to reduce time to identify and measure objects from contours and other information. To guarantee performance they would also need to run a non-GC language that could be programmed from Squeak interactively as new objects are being learned. I haven't worked with a laser range finder but I suspect they use it to narrow the focus onto moving objects to process video in more detail in those areas.
The current buzzword "co-robots" - robots that work beside or cooperatively with people, in symbiotic relationships with human partners - suggests everyone will need a robot friend, which will require an artificial intelligence capable of intelligent thought. As most Americans are Christian, it would make sense for a human-compatible AI to be based on the Bible. That is what I would love to work on. But that level of thought needs a creative CG environment like Squeak at present.
I've been thinking that using a Smalltalk GUI to issue command rules setting an agenda for automatic text analysis and editing might be fun - letting the computer do the editing instead of me. That way it could update the AI knowledge, for example when a preferred synonym is discovered, without taking human time for much beyond the setup.
Your Wikipedia entry shows a webpage and blog that apparently are dead links. Would you be interested in being a team member on my SBIR/STTR grant application(s) for AI and robots, responding to http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505 ? I've enlisted help in writing the application from Oregon's Small Business Development Center, and I'm told I will meet with an SBIR road trip in August. (I was also told I need a Ph.D. on my team since I don't have one.)
Kirk Fraser
Hey Kirk,
I like Ralph's suggestion of doing the time/timing specific stuff on a dedicated microcontroller.
I'd recommend going one better: use more than one microcontroller. Robots need to do a lot in parallel; if the robot has to stop driving in order to think, that's a problem (although the converse would be decidedly human!). Anyway, it sounds like real-time is non-negotiable in your view, so green threads won't cut it either.
Mine has... six controllers in total. That's not counting the ARM9, which is more like a full computer (e.g., it runs Linux).
I think six, anyway. Could be more hiding in there. Two drive sensors, three drive motors, and one wired up close to the ARM board to coordinate the other controllers on behalf of whatever the Linux system wants them doing.
I'm curious: have you figured out what the average, best-case, and worst-case latencies are for human reflexes? In my view, matching or beating that benchmark is where the money probably is.
--C
On Jul 6, 2015, at 12:39 PM, Kirk Fraser overcomer.man@gmail.com wrote:
Ralph Johnson,
That's an excellent suggestion and an excellent story, thank you very much! Letting the human interface in Smalltalk program the robot controller instead of being the robot controller sounds good.
My robot uses a network of Parallax microcontroller chips to drive hydraulic valves, which can be programmed via USB for simple tasks like moving one joint from point A to B but since each controller has 8 cores more complex tasks like grasping or walking can be done on the MCU's or on a small Raspberry Pi or other hardware in a non-GC or controllable GC language.
A harder part to wrap my head around is handling the webcam vision system and artificial intelligence while remaining time sensitive enough to do time critical tasks like cartwheels and other acrobatic choreography.
I know in effect my human mind shuts down most of its intellectual pursuits when engaged in heavy physical activity - maybe the robot must do the same - think more creatively when idling and pay closer attention while working. That takes care of the Ai timing.
The heavy load of vision processing appears to need a mini-cloud of cores to reduce time to identify and measure objects from contours and other information. To guarantee performance they would also need to run a non-GC language that could be programmed from Squeak interactively as new objects are being learned. I haven't worked with a laser range finder but I suspect they use it to narrow the focus onto moving objects to process video in more detail in those areas.
The current buzzword "co-robots" meaning robots that work beside or cooperatively with people working in symbiotic relationships with human partners suggests everyone will need a robot friend, which will require an artificial intelligence capable of intelligent thought. As most Americans are Christian it would make sense for a human compatible AI to be based on the Bible. That is what I would love to work on. But that level of thought needs a creative CG environment like Squeak at present.
I've been thinking that using a Smalltalk GUI to issue command rules to set an agenda for automatic text analysis and editing might be fun, letting the computer do the editing instead of me. That way it could update the AI knowledge like when a preferred synonym is discovered, without taking human time to do much of it beyond the setup.
Your wikipedia entry shows a webpage and blog that apparently are dead links. Would you be interested in being a team member on my SBIR/STTR grant application(s) for AI and Robots responding to: http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505 I've enlisted help in writing the application from Oregon's Small Business Development Center and will meet with an SBIR road trip in August I'm told. (I was also told I need a Ph.D. on my team since I don't have one.)
Kirk Fraser
On Mon, Jul 6, 2015 at 4:19 AM, Ralph Johnson johnson@cs.uiuc.edu wrote: Here is another possibility.
Take a look at Symbolic Sound, a company that makes a system called Kyma. http://kyma.symbolicsound.com/
This company has been around for over twenty years. Its product has always been the fastest music synthesis system in the world that gives you total control over your sound. And by "total", I mean it gives you the ability to mathematically specify each sound wave. If you want, which is actually too much detail for most people. And it is all written in Smalltalk. Not Squeak, of course, since Squeak wasn't around then. But it could have been done in Squeak. And perhaps they ported it to Squeak. I haven't talked to them for a long time so I don't know what they did, but from the screen shots I think it is still a very old version of VisualWorks.
Anyway, how do they make it so fast? How can they make something that can be used for hours without any GC pauses?
The trick is that the sound is produced on an attached DSP. The GUI is in Smalltalk on a PC, and it generates code for the DSP. It is non-trivial making the compiler so fast that when you press "play", it can immediately start up the DSP and start producing sound. It does this (rather, it did this, since they might have changed the way it works) by just producing enough code to run the DSP for a few seconds and then starting the DSP while it generates the rest of the code. Kyma literally is writing the program into DSP memory at the same time as the DSP is running the program, producing sound.
Anyway, maybe that is the right approach to programming robots. You don't even need to use two computers. Imagine you had two computers, one running Squeak and the other a simple, real-time machine designed for controlling robots, but not very sophisticated. Squeak programs the simple computer, and can change its program dynamically. The simple computer has no gc. Since Squeak is a VM on a computer, the real-time computer can be a VM, too. So, you could be running them both on your PC, or you could run them on two separate computers for better performance.
I would be happy to talk more about this. But I'd like to talk about the beginning of Kyma. The owners of Symbolic Sound are Carla Scaletti and Kurt Hebel. Carla has a PhD in music, and Kurt in Electrical Engineering. I met Carla after she had her PhD. She wanted to get a MS in computer science so she could prove her computer music expertise, and she ended up getting it with me. She took my course on OOP&D that used Smalltalk. For her class project (back in 1987, I think) she wrote a Smalltalk program that ran on the Mac and that produced about ten seconds of sound, but it took several minutes to do it. Hardly real time. However, she was used to using a supercomputer (a Cray?) to generate sounds that still weren't real time, so she was very pleased that she could do it on the Mac at all, and though Smalltalk was slower than Fortran, in her opinion the ease of use was so great that she didn't mind the speed difference. As she put it, the speed difference between a Mac and a Cray was bigger than between Smalltalk and Fortran. She ended up turning this into the first version of Kyma and that became the subject of her MS thesis. I can remember when she showed it in class. She was the only woman in the class, and the other students knew she was a musician, i.e. not *really* a programmer. She was quiet during class, so they had not had a chance to have their prejudices remedied. Her demo at the end of the semester blew them away.
Kurt had built a DSP that their lab used. (The lab was part of the Plato project, I believe, one of the huge number of creative results of this very significant project at Illinois.) It was called the Capybara. This was before the time when you could just buy a good DSP on a chip, but that time came very soon and then they used the commercial chips. For her MS, she converted her system to use the Capybara, and this was when she figured out how to make it start making music within a fraction of a second of pressing the "play" button. Kurt also used Smalltalk with the Capybara. His PhD was about automatically designing digital filters, and his software also generated code for the Capybara, though it was actually quite different from Kyma.
The two of them worked on several different projects over the next few years, but kept improving Kyma. Along the way Kurt started building boards that had several commercial DSPs on them. Eventually they decided to go commercial and started Symbolic Sound.
-Ralph Johnson
On Sun, Jul 5, 2015 at 9:05 PM, Kirk Fraser overcomer.man@gmail.com wrote:
> > Tim says a multi-core VM is coming for the new Pi.
> Are you *sure* that's what Tim said?
Of course my over hopeful misinterpretation is possible.
"Squeak runs quite well on a Pi, especially a pi2 - and we're working on the Cog dynamic translation VM right now, which should with luck triple typical performance." - timrowledge » Thu Feb 19, 2015 https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=100804&p=698...
> The trick to getting rid of long delays is more a function of preallocating everything you can than getting rid of GCs. (I've done some highly interactive stuff in GC environments; preventing GCs is impractical except over short periods of time, but minimizing their frequency and duration is very doable.) One of the things I think I recently saw that should help you in this regard is FFI memory pinning, if you're calling out to external code.
Thanks. Maybe when I find, make, or build a better place to work, I'll be able to tackle some of that. I wouldn't be surprised if a VM is as easy as a compiler once one actually starts working on it.
On Sun, Jul 5, 2015 at 6:31 PM, Phil (list) pbpublist@gmail.com wrote:
On Sun, 2015-07-05 at 17:12 -0700, Kirk Fraser wrote:
> I used Cuis at first to display hand-written G-codes in graphic form for a printed circuit board. I kept up with Cuis through a few versions and found a couple of bugs for Juan. Eventually Casey advised going to Squeak so I did. Perhaps my requests were getting annoying.
Perhaps you misinterpreted what Casey said? Definitely have all options (Squeak, Pharo, Cuis etc.) as part of your toolkit. Squeak in particular has a very active mailing list and you'll find a lot of existing code to play with. I personally do most of my development in Cuis, some in Pharo (for things like Seaside that don't yet exist in Cuis), and a bit still in Squeak. They all have their place depending on your needs. Given your emphasis on performance, I would think that Cuis is going to be the place where you can maximize it. (All the above Smalltalk variants use essentially the same core VM; it's the plugins and images that really differ.)
> I'm mostly interested in using a multi-core Squeak with GC control for my robot. Tim says a multi-core VM is coming for the new Pi. He hasn't answered on GC control. With multi-core, a user need not see GC control, but the system should provide 100% GC-free service even if behind the scenes it momentarily toggles one GC off and lets the other complete.
Are you *sure* that's what Tim said? I see a thread where he's talking about *build* performance (i.e. compiling the C code for the VM) on a quad-core with the caveat 'even if Squeak can't directly take advantage' (i.e. no multi-core VM)
> With real-time driving, which I hope my robot will do some day, getting rid of all 100ms delays is vital.
The trick to getting rid of long delays is more a function of preallocating everything you can than getting rid of GCs. (I've done some highly interactive stuff in GC environments; preventing GCs is impractical except over short periods of time, but minimizing their frequency and duration is very doable.) One of the things I think I recently saw that should help you in this regard is FFI memory pinning, if you're calling out to external code.
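In Squeak terms, the preallocation idea might look like the sketch below: build all the buffers up front so the inner control loop allocates nothing for the GC to chase. This is only an illustration of the general technique; readSensorsInto: and actuateFrom: are invented placeholders.

    "Preallocate a pool of reusable buffers at startup..."
    | pool buffer |
    pool := OrderedCollection new.
    100 timesRepeat: [pool add: (ByteArray new: 64)].
    "...then the control loop recycles them instead of allocating.
     Runs forever: this stands in for the robot's control loop."
    [true] whileTrue:
        [buffer := pool removeFirst.
         self readSensorsInto: buffer.    "invented sensor read"
         self actuateFrom: buffer.        "invented actuator write"
         pool addLast: buffer]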
On Sun, Jul 5, 2015 at 4:54 PM, Dan Norton dnorton@mindspring.com wrote:
On 5 Jul 2015 at 16:22, Kirk Fraser wrote:
> [snip]
> The state of Squeak for all its benefits seems like a collection of law statutes, a big set of text contributed by years of legislation that nobody can remember all of and some of which makes little sense. Maybe a major rewrite starting from zero would help?
"like a collection of law statutes" is a good analogy. Cuis seems like a major rewrite of Squeak and is simpler, easier to understand. What do you think of Cuis?
> [snip]
- Dan
Hi Casey,
Thanks for the suggestion. I will have multiple connected controller boards, and with your suggestion maybe I'll try a Pi for each limb and each webcam, or maybe your ARM9 suggestion.
To prove it performs as well as a human, I want it to do minor acrobatics like cartwheels and balancing tricks, maybe a Chuck Norris kick or jumping over a fence with one hand on a post. Or like those free-running videos. Stuff I could not do myself. But it all waits on money. Maybe I'll make better progress next year when social security kicks in.
As far as human performance goals, one professor wrote it takes 200 cycles per second to hop on one leg. Somewhere I read human touch can sense 1/32,000 of an inch. I don't have the other figures yet. I may not be able to drive an arm as fast as a human boxer - 200 mph - but as long as it's fast enough to drive a vehicle on a slow road (not I-5) that might be enough until faster computers are here.
The vision system seems like a major speed bottleneck. Maybe a mini-cloud can give one 64th of the image to each processor to analyze, then assemble larger object detection in time for the next frame. The DARPA Atlas robot used 4 cores for each camera, I think. But a mini-cloud set off nearby to process vision and return either the objects with measurements or instructions would be a lot of work. The more I write, the more I see why the head of DARPA's robots said they cost $1-2 million: you have to hire a team of programmers or make a really intricate learning program.
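The 1/64th idea in concrete form would be an 8 x 8 tiling of each frame, one tile per worker. A sketch using Squeak's Form class; detectObjectsIn: is an invented placeholder for whatever analysis each core would run.

    "Cut a 640x480 frame into 8 x 8 = 64 tiles, one per processor."
    | frame tileExtent tiles |
    frame := Form extent: 640@480 depth: 32.          "stand-in for a camera frame"
    tileExtent := frame extent // 8.
    tiles := OrderedCollection new.
    0 to: 7 do: [:row |
        0 to: 7 do: [:col |
            tiles add: (frame copy: ((col @ row) * tileExtent extent: tileExtent))]].
    "in reality each tile would be farmed out to its own core or machine:"
    tiles do: [:tile | self detectObjectsIn: tile]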
Kirk
On Mon, Jul 13, 2015 at 5:58 PM, Casey Ransberger casey.obrien.r@gmail.com wrote:
Hey Kirk,
I like Ralph's suggestion of doing the time/timing specific stuff on a dedicated microcontroller.
I'd recommend going one better: use more than one microcontroller. Robots need to do a lot in parallel; if the robot has to stop driving in order to think, that's a problem (although the converse would be decidedly human!) Anyway, it sounds like real-time is not negotiable in your view, so green threads won't cut it either.
Mine has... six controllers in total. That's not counting the ARM9, which is more like a full computer (it runs Linux).
I think six anyway. Could be more hiding in there. Two for the drive sensors, three for the drive motors, and one wired up close to the ARM board to coordinate the other controllers on behalf of what the Linux system wants them doing.
I'm curious, have you figured out what the average, best, and worst case latencies are on human reflexes? In my view, matching or beating that benchmark is where the money probably is.
--C
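Half of Casey's benchmark question can be measured on the machine side today: how much latency the Squeak scheduler itself adds to a periodic task, to set against human reflex times (often cited at roughly 200 ms for a simple reaction). A workspace sketch; the 10 ms period and 100 iterations are arbitrary choices.

    "Measure overshoot beyond a requested 10 ms delay."
    | delay overshoots t0 |
    delay := Delay forMilliseconds: 10.
    overshoots := OrderedCollection new.
    100 timesRepeat:
        [t0 := Time millisecondClockValue.
         delay wait.
         overshoots add: Time millisecondClockValue - t0 - 10].
    Transcript show: 'worst overshoot: ', overshoots max printString, ' ms'; cr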
On Jul 6, 2015, at 12:39 PM, Kirk Fraser overcomer.man@gmail.com wrote:
Ralph Johnson,
That's an excellent suggestion and an excellent story, thank you very much! Letting the human interface in Smalltalk program the robot controller instead of being the robot controller sounds good.
My robot uses a network of Parallax microcontroller chips to drive hydraulic valves. They can be programmed via USB for simple tasks like moving one joint from point A to B, but since each controller has 8 cores, more complex tasks like grasping or walking can be done on the MCUs, or on a small Raspberry Pi or other hardware running a language with no GC or controllable GC.
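The point-A-to-B case might look like this from Squeak, assuming the SerialPort class for the USB link; the 'MOVE' text protocol here is invented, not Parallax's actual firmware protocol.

    "Send one hypothetical single-joint command over USB serial."
    | port |
    port := SerialPort new.
    port baudRate: 115200.
    port openPort: 0.                                 "first serial device"
    port nextPutAll: 'MOVE 3 120 240', String cr.     "joint 3: 120 -> 240"
    port close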
A harder part to wrap my head around is handling the webcam vision system and artificial intelligence while remaining time-sensitive enough to do time-critical tasks like cartwheels and other acrobatic choreography.
I know my own mind in effect shuts down most of its intellectual pursuits when engaged in heavy physical activity. Maybe the robot must do the same: think more creatively when idling and pay closer attention while working. That takes care of the AI timing.
The heavy load of vision processing appears to need a mini-cloud of cores to reduce time to identify and measure objects from contours and other information. To guarantee performance they would also need to run a non-GC language that could be programmed from Squeak interactively as new objects are being learned. I haven't worked with a laser range finder but I suspect they use it to narrow the focus onto moving objects to process video in more detail in those areas.
The current buzzword "co-robots", meaning robots that work beside or cooperatively with people in symbiotic relationships with human partners, suggests everyone will need a robot friend, which will require an artificial intelligence capable of intelligent thought. As most Americans are Christian it would make sense for a human-compatible AI to be based on the Bible. That is what I would love to work on. But that level of thought needs a creative CG environment like Squeak at present.
I've been thinking that using a Smalltalk GUI to issue command rules to set an agenda for automatic text analysis and editing might be fun, letting the computer do the editing instead of me. That way it could update the AI knowledge like when a preferred synonym is discovered, without taking human time to do much of it beyond the setup.
Your Wikipedia entry shows a webpage and blog that apparently are dead links. Would you be interested in being a team member on my SBIR/STTR grant application(s) for AI and Robots responding to http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505 ? I've enlisted help in writing the application from Oregon's Small Business Development Center, and I'm told I will meet with an SBIR road trip in August. (I was also told I need a Ph.D. on my team since I don't have one.)
Kirk Fraser
On Tue, Jul 14, 2015 at 4:42 AM, Ben Coman btc@openinworld.com wrote:
Smalltalk FPGA may be of interest...
http://www.slideshare.net/esug/luc-fabresse-iwst2014
http://esug.org/data/ESUG2014/IWST/Papers/iwst2014_From%20Smalltalk%20to%20S...
cheers -ben
Ben,
Thanks for the FPGA idea and links. I've seen that Parallax released their microcontroller design to the public, so their community can make their own microcontroller FPGAs. Getting Smalltalk code into silicon would be great for a second version of my robot after the prototype works. Or it could be a useful step in dealing with computer vision.
Kirk
Wow! Ben's article and slides show Ralph Johnson's suggestion several steps closer to real life, and they touch on computer vision for robots.
I'm still thinking a laser range finder is not necessary since humans don't have them. We get by with eyes that can measure distances with or without stereo enhancement, and the stereo image facility helps us see things neither eye can make out alone.
On Tue, Jul 14, 2015 at 5:30 AM, Kirk Fraser overcomer.man@gmail.com wrote:
Ben,
Thanks for the FPGA idea and links. I've seen Parallax released their microcontroller design to public so their community can make their own microcontroller FPGA's. Getting Smalltalk code into silicon would be great for a second version of my robot after the prototype works. Or it could be a useful step in dealing with computer vision.
Kirk
On Tue, Jul 14, 2015 at 4:42 AM, Ben Coman btc@openinworld.com wrote:
Smalltalk FPGA may be of interest...
http://www.slideshare.net/esug/luc-fabresse-iwst2014
http://esug.org/data/ESUG2014/IWST/Papers/iwst2014_From%20Smalltalk%20to%20S...
cheers -ben
On Tue, Jul 14, 2015 at 11:55 AM, Kirk Fraser overcomer.man@gmail.com wrote:
Hi Casey,
Thanks for the suggestion. I will have multiple connected controller
boards
and with your suggestion maybe I'll try a Pi for each limb and each
webcam
or maybe your ARM9 suggestion.
To prove it's good as a human in performance I want it to do minor acrobatics like cartwheels and balancing tricks, maybe a Chuck Norris
kick
or jumping over a fence with one hand on a post. Or like those
free-running
videos. Stuff I could not do myself. But it all waits on money. Maybe
I'll
make better progress next year when social security kicks in.
As far as human performance goals, one professor wrote it takes 200
cycles
per second to hop on one leg. Somewhere I read human touch can sense 1
in
32,000 of an inch. I don't have the other figures yet. I may not be
able
to drive an arm as fast as a human boxer - 200 mph but as long as it's
fast
enough to drive a vehicle on a slow road (not I5) that might be enough
until
faster computers are here.
The vision system seems like a major speed bottleneck. Maybe a mini-cloud can take one 64th of the image for each processor to analyze, then assemble larger object detection in time for the next frame. The DARPA Atlas robot used 4 cores for each camera, I think. But a mini-cloud set off nearby to process vision and return either the objects with measurements, or instructions, would be a lot of work. The more I write, the more I see why the head of DARPA's robotics program said those robots cost $1-2 million: you have to hire a team of programmers or make a really intricate learning program.
Kirk
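As a sketch of that one-64th idea in Squeak (illustration only: analyzeTile: is a hypothetical detector, and stock Squeak Processes are green threads on one core, so a real speedup needs one image or machine per tile), the tiling arithmetic itself is only a few lines:

  | frame tileWidth tileHeight tiles |
  frame := Form extent: 640@480 depth: 32.  "one camera frame"
  tileWidth := frame width // 8.
  tileHeight := frame height // 8.
  tiles := OrderedCollection new.
  0 to: 7 do: [:ty |
      0 to: 7 do: [:tx |
          tiles add: ((tx * tileWidth) @ (ty * tileHeight)
                          extent: tileWidth @ tileHeight)]].
  "Each rectangle names one 64th of the frame; Form>>copy: extracts
  that patch so a worker can analyze just its own piece."
  tiles do: [:r | | tile |
      tile := frame copy: r.
      "analyzeTile: tile would run here - hypothetical, one worker per tile"]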
On Mon, Jul 13, 2015 at 5:58 PM, Casey Ransberger <casey.obrien.r@gmail.com> wrote:
Hey Kirk,
I like Ralph's suggestion of doing the time/timing specific stuff on a dedicated microcontroller.
I'd recommend going one better: use more than one microcontroller. Robots need to do a lot in parallel; if the robot has to stop driving in order to think, that's a problem (although the converse would be decidedly human!) Anyway, it sounds like real-time is not negotiable in your view, so green threads won't cut it either.
Mine has... six controllers in total. That's not counting the ARM9, which is more like a full computer (e.g., Linux.) I think six, anyway. Could be more hiding in there. Two drive sensors, three drive motors, and one wired up close to the ARM board to coordinate the other controllers on behalf of what the Linux system wants them doing.
I'm curious: have you figured out what the average, best, and worst case latencies are on human reflexes? In my view, matching or beating that benchmark is where the money probably is.
--C
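A side note on the software half of that benchmark: commonly cited figures put simple human reaction time roughly in the 150-250 ms range, and a Squeak image can at least measure its own contribution. A small workspace sketch (core classes only; the 10 ms tick is an arbitrary choice) that records the worst overshoot of a periodic control tick - GC pauses and scheduler jitter show up directly in the result:

  | target t0 t1 worst |
  target := 10.  "desired tick period, in milliseconds"
  worst := 0.
  t0 := Time millisecondClockValue.
  1000 timesRepeat: [
      (Delay forMilliseconds: target) wait.
      t1 := Time millisecondClockValue.
      worst := worst max: t1 - t0 - target.  "lateness of this tick"
      t0 := t1].
  Transcript show: 'worst overshoot: ', worst printString, ' ms'; cr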
On Jul 6, 2015, at 12:39 PM, Kirk Fraser overcomer.man@gmail.com wrote:
Ralph Johnson,
That's an excellent suggestion and an excellent story, thank you very much! Letting the human interface in Smalltalk program the robot controller, instead of being the robot controller, sounds good.
My robot uses a network of Parallax microcontroller chips to drive hydraulic valves. They can be programmed via USB for simple tasks like moving one joint from point A to B, but since each controller has 8 cores, more complex tasks like grasping or walking can be done on the MCUs, or on a small Raspberry Pi or other hardware, in a non-GC or controllable-GC language.
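On the programmed-via-USB part: many Squeak images include a SerialPort class, so nudging one joint from the image can look roughly like the sketch below. Everything here is a placeholder - the port number, the baud rate, and especially the 'MOVE' text, which stands in for whatever protocol the Parallax firmware actually speaks.

  | port |
  port := SerialPort new.
  port baudRate: 115200.   "placeholder; match the controller's setting"
  port openPort: 0.        "device numbering is platform-dependent"
  port nextPutAll: 'MOVE 3 1200'.  "hypothetical command: joint 3 to position 1200"
  port nextPutAll: String crlf.
  port close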
A harder part to wrap my head around is handling the webcam vision system and artificial intelligence while remaining time-sensitive enough to do time-critical tasks like cartwheels and other acrobatic choreography.
I know my human mind in effect shuts down most of its intellectual pursuits when engaged in heavy physical activity - maybe the robot must do the same: think more creatively when idling and pay closer attention while working. That takes care of the AI timing.
The heavy load of vision processing appears to need a mini-cloud of cores to reduce the time to identify and measure objects from contours and other information. To guarantee performance, they would also need to run a non-GC language that could be programmed from Squeak interactively as new objects are being learned. I haven't worked with a laser range finder, but I suspect they use it to narrow the focus onto moving objects, to process video in more detail in those areas.
The current buzzword "co-robots" - robots that work beside or cooperatively with people, in symbiotic relationships with human partners - suggests everyone will need a robot friend, which will require an artificial intelligence capable of intelligent thought. As most Americans are Christian, it would make sense for a human-compatible AI to be based on the Bible. That is what I would love to work on. But that level of thought needs a creative CG environment like Squeak at present.
I've been thinking that using a Smalltalk GUI to issue command rules, setting an agenda for automatic text analysis and editing, might be fun - letting the computer do the editing instead of me. That way it could update the AI knowledge, as when a preferred synonym is discovered, without taking human time for much beyond the setup.
Your Wikipedia entry shows a webpage and blog that apparently are dead links. Would you be interested in being a team member on my SBIR/STTR grant application(s) for AI and robots, responding to http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505 ? I've enlisted help in writing the application from Oregon's Small Business Development Center, and I'm told I will meet with an SBIR road trip in August. (I was also told I need a Ph.D. on my team, since I don't have one.)
Kirk Fraser
On Mon, Jul 6, 2015 at 4:19 AM, Ralph Johnson johnson@cs.uiuc.edu wrote:
Here is another possibility.
Take a look at Symbolic Sound, a company that makes a system called Kyma: http://kyma.symbolicsound.com/
This company has been around for over twenty years. Its product has always been the fastest music synthesis system in the world that gives you total control over your sound. And by "total", I mean it gives you the ability to mathematically specify each sound wave, if you want - which is actually too much detail for most people. And it is all written in Smalltalk. Not Squeak, of course, since Squeak wasn't around then. But it could have been done in Squeak. And perhaps they ported it to Squeak; I haven't talked to them for a long time, so I don't know what they did, but from the screen shots I think it is still a very old version of VisualWorks.
Anyway, how do they make it so fast? How can they make something that can be used for hours without any GC pauses?
The trick is that the sound is produced on an attached DSP. The GUI is in Smalltalk on a PC, and it generates code for the DSP. It is non-trivial to make the compiler so fast that when you press "play" it can immediately start up the DSP and start producing sound. It does this (rather, it did this, since they might have changed the way it works) by producing just enough code to run the DSP for a few seconds, then starting the DSP while it generates the rest of the code. Kyma literally is writing the program into DSP memory at the same time as the DSP is running the program, producing sound.
Anyway, maybe that is the right approach to programming robots. You don't even need to use two computers. Imagine you had two computers, one running Squeak and the other a simple real-time machine designed for controlling robots, but not very sophisticated. Squeak programs the simple computer, and can change its program dynamically. The simple computer has no GC. Since Squeak is a VM on a computer, the real-time computer can be a VM too. So you could be running them both on your PC, or you could run them on two separate computers for better performance.
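The shape of that trick is worth sketching. In deliberately hypothetical Smalltalk - RealTimeBoard and CodeChunkGenerator don't exist; they stand in for the GC-free device and the code generator - the Squeak side loads just enough generated code to cover the first few seconds, starts the device, then keeps streaming the rest while the device is already running:

  | device generator |
  device := RealTimeBoard new.  "hypothetical proxy for the DSP or robot board"
  generator := CodeChunkGenerator on: #walkCycle.  "hypothetical compiler"
  device load: generator nextChunk.  "a few seconds' worth of code"
  device start.  "the real-time side begins executing at once"
  [generator atEnd] whileFalse:
      [device load: generator nextChunk]  "keep writing while it runs"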
I would be happy to talk more about this. But I'd like to talk about the beginning of Kyma. The owners of Symbolic Sound are Carla Scaletti and Kurt Hebel. Carla has a PhD in music, and Kurt in Electrical Engineering. I met Carla after she had her PhD. She wanted to get an MS in computer science so she could prove her computer music expertise, and she ended up getting it with me. She took my course on OOP&D that used Smalltalk. For her class project (back in 1987, I think) she wrote a Smalltalk program that ran on the Mac and produced about ten seconds of sound, but it took several minutes to do it. Hardly real time. However, she was used to using a supercomputer (a Cray?) to generate sounds that still weren't real time, so she was very pleased that she could do it on the Mac at all, and though Smalltalk was slower than Fortran, in her opinion the ease of use was so great that she didn't mind the speed difference. As she put it, the speed difference between a Mac and a Cray was bigger than between Smalltalk and Fortran. She ended up turning this into the first version of Kyma, and that became the subject of her MS thesis. I can remember when she showed it in class. She was the only woman in the class, and the other students knew she was a musician, i.e. not *really* a programmer. She was quiet during class, so they had not had a chance to have their prejudices remedied. Her demo at the end of the semester blew them away.
Kurt had built a DSP that their lab used. (The lab was part of the Plato project, I believe, one of the huge number of creative results of this very significant project at Illinois.) It was called the Capybara. This was before the time when you could just buy a good DSP on a chip, but that time came very soon, and then they used the commercial chips. For her MS, she converted her system to use the Capybara, and this was when she figured out how to make it start making music within a fraction of a second of pressing the "play" button. Kurt also used Smalltalk with the Capybara. His PhD was about automatically designing digital filters, and his software also generated code for the Capybara, though it was actually quite different from Kyma.
The two of them worked on several different projects over the next few years, but kept improving Kyma. Along the way Kurt started building boards that had several commercial DSPs on them. Eventually they decided to go commercial and started Symbolic Sound.
-Ralph Johnson
On Sun, Jul 5, 2015 at 9:05 PM, Kirk Fraser overcomer.man@gmail.com wrote:
>> Tim says a multi-core VM is coming for the new Pi.
> Are you *sure* that's what Tim said?
Of course, my over-hopeful misinterpretation is possible.
"Squeak runs quite well on a Pi, especially a pi2 - and we're
working on
the Cog dynamic translation VM right now, which should with luck
triple
typical performance." - timrowledge » Thu Feb 19, 2015
https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=100804&p=698...
> The trick to getting rid of long delays is more a function of
> preallocating everything you can than of getting rid of GCs. (I've
> done some highly interactive work in GC environments; preventing GCs
> entirely is impractical except over short periods, but minimizing
> their frequency and duration is very doable.) One thing I saw
> recently that should help you in this regard is FFI memory pinning,
> if you're calling out to external code.
Thanks. Maybe when I find, make, or build a better place to work, I'll be able to tackle some of that. I wouldn't be surprised if a VM is as easy as a compiler once one actually starts working on it.
On Wed, Jul 15, 2015 at 1:28 AM, Kirk Fraser overcomer.man@gmail.com wrote:
> Wow! Ben's article and slides show Ralph Johnson's suggestion taken
> several steps closer to real life, and applied to computer vision
> for robots.
> I'm still thinking a laser range finder is not necessary, since
> humans don't have them. We get by with eyes that can measure
> distance with or without stereo enhancement, and the stereo image
> facility helps us see things neither eye can make out alone.
OT: That sounds a bit purist. Just because we haven't evolved laser vision (yet) doesn't mean it's not useful. If it needs less computation than stereo vision, then I'd say use the tools you've got :)
cheers -ben :)-|--<
On Thu, Jul 16, 2015 at 10:10 AM, Ben Coman btc@openinworld.com wrote:
And another option might be using the Programmable Realtime Unit (PRU) of the Beaglebone Black. For example, tight-loop toggling of an LED:
* 1 GHz ARM Cortex-A8 = 200 ns
* 200 MHz PRU = 5 ns
ELC 2015 - Enhancing Real-Time Capabilities with the PRU - Rob Birkett, Texas Instruments
http://events.linuxfoundation.org/sites/events/files/slides/Enhancing%20RT%2...
https://www.youtube.com/watch?v=plCYsbmMbmY
cheers -ben
Thanks Ben and all. I'll keep these posts.