<div dir="ltr">Thanks Ben and all. I'll keep these posts. </div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 16, 2015 at 10:10 AM, Ben Coman <span dir="ltr"><<a href="mailto:btc@openinworld.com" target="_blank">btc@openinworld.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">And another option might be using the Programmable Realtime Unit of<br>
the Beaglebone Black.<br>
For example, tight-loop toggling of an LED:<br>
* 1 GHz ARM Cortex-A8 = 200 ns<br>
* 200 MHz PRU = 5 ns<br>
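(For what it's worth, the 5 ns figure is just the PRU's single-cycle instruction time; this arithmetic is mine, not from the slides:)

```python
# A 200 MHz PRU executes one instruction per clock cycle, so a
# single-instruction GPIO toggle takes one clock period. The 1 GHz A8's
# 200 ns is dominated by Linux/peripheral-bus overhead, not its clock.
pru_hz = 200e6                 # PRU clock rate
period_ns = 1e9 / pru_hz       # one instruction cycle, in nanoseconds
print(period_ns)               # 5.0
```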
<br>
ELC 2015 - Enhancing Real-Time Capabilities with the PRU - Rob<br>
Birkett, Texas Instruments<br>
<a href="http://events.linuxfoundation.org/sites/events/files/slides/Enhancing%20RT%20Capabilities%20with%20the%20PRU%20final.pdf" rel="noreferrer" target="_blank">http://events.linuxfoundation.org/sites/events/files/slides/Enhancing%20RT%20Capabilities%20with%20the%20PRU%20final.pdf</a><br>
<a href="https://www.youtube.com/watch?v=plCYsbmMbmY" rel="noreferrer" target="_blank">https://www.youtube.com/watch?v=plCYsbmMbmY</a><br>
<br>
cheers -ben<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
On Tue, Jul 14, 2015 at 7:42 PM, Ben Coman <<a href="mailto:btc@openinworld.com">btc@openinworld.com</a>> wrote:<br>
> Smalltalk FPGA may be of interest...<br>
><br>
> <a href="http://www.slideshare.net/esug/luc-fabresse-iwst2014" rel="noreferrer" target="_blank">http://www.slideshare.net/esug/luc-fabresse-iwst2014</a><br>
><br>
> <a href="http://esug.org/data/ESUG2014/IWST/Papers/iwst2014_From%20Smalltalk%20to%20Silicon_Towards%20a%20methodology%20to%20turn%20Smalltalk%20code%20into%20FPGA.pdf" rel="noreferrer" target="_blank">http://esug.org/data/ESUG2014/IWST/Papers/iwst2014_From%20Smalltalk%20to%20Silicon_Towards%20a%20methodology%20to%20turn%20Smalltalk%20code%20into%20FPGA.pdf</a><br>
><br>
> cheers -ben<br>
><br>
> On Tue, Jul 14, 2015 at 11:55 AM, Kirk Fraser <<a href="mailto:overcomer.man@gmail.com">overcomer.man@gmail.com</a>> wrote:<br>
>> Hi Casey,<br>
>><br>
>> Thanks for the suggestion. I will have multiple connected controller boards<br>
>> and with your suggestion maybe I'll try a Pi for each limb and each webcam<br>
>> or maybe your ARM9 suggestion.<br>
>><br>
>> To prove it's as good as a human in performance, I want it to do minor<br>
>> acrobatics like cartwheels and balancing tricks, maybe a Chuck Norris kick<br>
>> or jumping over a fence with one hand on a post. Or like those free-running<br>
>> videos. Stuff I could not do myself. But it all waits on money. Maybe I'll<br>
>> make better progress next year when social security kicks in.<br>
>><br>
>> As far as human performance goals, one professor wrote it takes 200 cycles<br>
>> per second to hop on one leg. Somewhere I read human touch can sense<br>
>> 1/32,000 of an inch. I don't have the other figures yet. I may not be able<br>
>> to drive an arm as fast as a human boxer - 200 mph but as long as it's fast<br>
>> enough to drive a vehicle on a slow road (not I5) that might be enough until<br>
>> faster computers are here.<br>
>><br>
>> The vision system seems like a major speed bottleneck. Maybe a mini-cloud<br>
>> could have each processor analyze one 64th of the image, then assemble<br>
>> the larger object detections in time for the next frame. The DARPA Atlas robot<br>
>> used 4 cores for each camera I think. But a mini-cloud set off nearby to<br>
>> process vision and return either the objects with measurements or<br>
>> instructions would be a lot of work. The more I write, the more I see why<br>
>> the head of DARPA's robotics program said they cost 1-2 million, as you have to hire a<br>
>> team of programmers or make a really intricate learning program.<br>
>><br>
>> Kirk<br>
>><br>
>><br>
>> On Mon, Jul 13, 2015 at 5:58 PM, Casey Ransberger <<a href="mailto:casey.obrien.r@gmail.com">casey.obrien.r@gmail.com</a>><br>
>> wrote:<br>
>>><br>
>>> Hey Kirk,<br>
>>><br>
>>> I like Ralph's suggestion of doing the time/timing specific stuff on a<br>
>>> dedicated microcontroller.<br>
>>><br>
>>> I'd recommend going one better: use more than one microcontroller. Robots<br>
>>> need to do a lot in parallel; if the robot has to stop driving in order to<br>
>>> think, that's a problem (although the converse would be decidedly human!)<br>
>>> Anyway, it sounds like real-time is not negotiable in your view, so green<br>
>>> threads won't cut it either.<br>
>>><br>
>>> Mine has... six controllers in total. That's not counting the ARM9 which<br>
>>> is more like a full computer (e.g., Linux.)<br>
>>><br>
>>> I think six anyway. Could be more hiding in there. Two drive sensors,<br>
>>> three drive motors, one is wired up close to the ARM board to coordinate the<br>
>>> other controllers on behalf of what the Linux system wants them doing.<br>
>>><br>
>>> I'm curious, have you figured out what the average, best, and worst case<br>
>>> latencies are on human reflexes? In my view, matching or beating that<br>
>>> benchmark is where the money probably is.<br>
>>><br>
>>> --C<br>
>>><br>
>>> On Jul 6, 2015, at 12:39 PM, Kirk Fraser <<a href="mailto:overcomer.man@gmail.com">overcomer.man@gmail.com</a>> wrote:<br>
>>><br>
>>> Ralph Johnson,<br>
>>><br>
>>> That's an excellent suggestion and an excellent story, thank you very<br>
>>> much! Letting the human interface in Smalltalk program the robot controller<br>
>>> instead of being the robot controller sounds good.<br>
>>><br>
>>> My robot uses a network of Parallax microcontroller chips to drive<br>
>>> hydraulic valves, which can be programmed via USB for simple tasks like<br>
>>> moving one joint from point A to B but since each controller has 8 cores<br>
>>> more complex tasks like grasping or walking can be done on the MCU's or on a<br>
>>> small Raspberry Pi or other hardware in a non-GC or controllable GC<br>
>>> language.<br>
>>><br>
>>> A harder part to wrap my head around is handling the webcam vision system<br>
>>> and artificial intelligence while remaining time sensitive enough to do time<br>
>>> critical tasks like cartwheels and other acrobatic choreography.<br>
>>><br>
>>> I know in effect my human mind shuts down most of its intellectual<br>
>>> pursuits when engaged in heavy physical activity - maybe the robot must do<br>
>>> the same - think more creatively when idling and pay closer attention while<br>
>>> working. That takes care of the AI timing.<br>
>>><br>
>>> The heavy load of vision processing appears to need a mini-cloud of cores<br>
>>> to reduce time to identify and measure objects from contours and other<br>
>>> information. To guarantee performance they would also need to run a non-GC<br>
>>> language that could be programmed from Squeak interactively as new objects<br>
>>> are being learned. I haven't worked with a laser range finder but I suspect<br>
>>> they use it to narrow the focus onto moving objects to process video in more<br>
>>> detail in those areas.<br>
>>><br>
>>> The current buzzword "co-robots" (robots that work beside or<br>
>>> cooperatively with people, in symbiotic relationships with human<br>
>>> partners) suggests everyone will need a robot friend, which will require an<br>
>>> artificial intelligence capable of intelligent thought. As most Americans<br>
>>> are Christian it would make sense for a human compatible AI to be based on<br>
>>> the Bible. That is what I would love to work on. But that level of thought<br>
>>> needs a creative CG environment like Squeak at present.<br>
>>><br>
>>> I've been thinking that using a Smalltalk GUI to issue command rules to<br>
>>> set an agenda for automatic text analysis and editing might be fun, letting<br>
>>> the computer do the editing instead of me. That way it could update the AI<br>
>>> knowledge like when a preferred synonym is discovered, without taking human<br>
>>> time to do much of it beyond the setup.<br>
>>><br>
>>> Your Wikipedia entry shows a webpage and blog that appear to be dead<br>
>>> links. Would you be interested in being a team member on my SBIR/STTR grant<br>
>>> application(s) for AI and Robots responding to:<br>
>>> <a href="http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505" rel="noreferrer" target="_blank">http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505</a> I've<br>
>>> enlisted help in writing the application from Oregon's Small Business<br>
>>> Development Center and will meet with an SBIR road trip in August, I'm told.<br>
>>> (I was also told I need a Ph.D. on my team since I don't have one.)<br>
>>><br>
>>> Kirk Fraser<br>
>>><br>
>>><br>
>>> On Mon, Jul 6, 2015 at 4:19 AM, Ralph Johnson <<a href="mailto:johnson@cs.uiuc.edu">johnson@cs.uiuc.edu</a>> wrote:<br>
>>>><br>
>>>> Here is another possibility.<br>
>>>><br>
>>>> Take a look at Symbolic Sound, a company that makes a system called Kyma.<br>
>>>> <a href="http://kyma.symbolicsound.com/" rel="noreferrer" target="_blank">http://kyma.symbolicsound.com/</a><br>
>>>><br>
>>>> This company has been around for over twenty years. Its product has<br>
>>>> always been the fastest music synthesis system in the world that gives you<br>
>>>> total control over your sound. And by "total", I mean it gives you the<br>
>>>> ability to mathematically specify each sound wave, if you want, which is<br>
>>>> actually too much detail for most people. And it is all written in<br>
>>>> Smalltalk. Not Squeak, of course, since Squeak wasn't around then. But it<br>
>>>> could have been done in Squeak. And perhaps they ported it to Squeak. I<br>
>>>> haven't talked to them for a long time so I don't know what they did, but<br>
>>>> from the screen shots I think it is still a very old version of VisualWorks.<br>
>>>><br>
>>>> Anyway, how do they make it so fast? How can they make something that<br>
>>>> can be used for hours without any GC pauses?<br>
>>>><br>
>>>> The trick is that the sound is produced on an attached DSP. The GUI is<br>
>>>> in Smalltalk on a PC, and it generates code for the DSP. It is non-trivial<br>
>>>> making the compiler so fast that when you press "play", it can immediately<br>
>>>> start up the DSP and start producing sound. It does this (rather, it did<br>
>>>> this, since they might have changed the way it works) by just producing<br>
>>>> enough code to run the DSP for a few seconds and then starting the DSP while<br>
>>>> it generates the rest of the code. Kyma literally is writing the program<br>
>>>> into DSP memory at the same time as the DSP is running the program,<br>
>>>> producing sound.<br>
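(The overlap trick Ralph describes can be sketched very loosely, in Python with made-up names, as a compiler thread that stays only a chunk or two ahead of the executor, so "playback" starts after the first chunk rather than after full compilation:)

```python
import queue
import threading

# Loose sketch of the Kyma trick: begin "running" as soon as the first
# chunk of generated code is ready, while the rest is still being produced.
chunks = queue.Queue(maxsize=2)   # small buffer: producer stays just ahead
results = []

def compiler():
    for i in range(5):            # pretend each chunk takes time to generate
        chunks.put(f"chunk-{i}")
    chunks.put(None)              # end-of-program marker

def dsp():
    while True:
        c = chunks.get()
        if c is None:
            break
        results.append(c)         # "execute" the chunk

t = threading.Thread(target=compiler)
t.start()
dsp()                             # starts consuming after the first chunk
t.join()
print(results)
```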
>>>><br>
>>>> Anyway, maybe that is the right approach to programming robots. You<br>
>>>> don't even need to use two computers. Imagine you had two computers, one<br>
>>>> running Squeak and the other a simple, real-time machine designed for<br>
>>>> controlling robots, but not very sophisticated. Squeak programs the simple<br>
>>>> computer, and can change its program dynamically. The simple computer has<br>
>>>> no gc. Since Squeak is a VM on a computer, the real-time computer can be a<br>
>>>> VM, too. So, you could be running them both on your PC, or you could run<br>
>>>> them on two separate computers for better performance.<br>
>>>><br>
>>>> I would be happy to talk more about this. But I'd like to talk about the<br>
>>>> beginning of Kyma. The owners of Symbolic Sound are Carla Scaletti and<br>
>>>> Kurt Hebel. Carla has a PhD in music, and Kurt in Electrical Engineering.<br>
>>>> I met Carla after she had her PhD. She wanted to get a MS in computer<br>
>>>> science so she could prove her computer music expertise, and she ended up<br>
>>>> getting it with me. She took my course on OOP&D that used Smalltalk. For<br>
>>>> her class project (back in 1987, I think) she wrote a Smalltalk program that<br>
>>>> ran on the Mac and that produced about ten seconds of sound, but it took<br>
>>>> several minutes to do it. Hardly real time. However, she was used to<br>
>>>> using a supercomputer (a Cray?) to generate sounds that still weren't real<br>
>>>> time, so she was very pleased that she could do it on the Mac at all, and<br>
>>>> though Smalltalk was slower than Fortran, in her opinion the ease of use was<br>
>>>> so great that she didn't mind the speed difference. As she put it, the<br>
>>>> speed difference between a Mac and a Cray was bigger than between Smalltalk<br>
>>>> and Fortran. She ended up turning this into the first version of Kyma and<br>
>>>> that became the subject of her MS thesis. I can remember when she showed<br>
>>>> it in class. She was the only woman in the class, and the other students<br>
>>>> knew she was a musician, i.e. not *really* a programmer. She was quiet<br>
>>>> during class, so they had not had a chance to have their prejudices<br>
>>>> remedied. Her demo at the end of the semester blew them away.<br>
>>>><br>
>>>> Kurt had built a DSP that their lab used. (The lab was part of the<br>
>>>> Plato project, I believe, one of the huge number of creative results of this<br>
>>>> very significant project at Illinois.) It was called the Capybara. This<br>
>>>> was before the time when you could just buy a good DSP on a chip, but that<br>
>>>> time came very soon and then they used the commercial chips. For her MS,<br>
>>>> she converted her system to use the Capybara, and this was when she figured<br>
>>>> out how to make it start making music within a fraction of a second of<br>
>>>> pressing the "play" button. Kurt also used Smalltalk with the Capybara.<br>
>>>> His PhD was about automatically designing digital filters, and his software<br>
>>>> also generated code for the Capybara, though it was actually quite different<br>
>>>> from Kyma.<br>
>>>><br>
>>>> The two of them worked on several different projects over the next few<br>
>>>> years, but kept improving Kyma. Along the way Kurt started building boards<br>
>>>> that had several commercial DSPs on them. Eventually they decided to go<br>
>>>> commercial and started Symbolic Sound.<br>
>>>><br>
>>>> -Ralph Johnson<br>
>>>><br>
>>>> On Sun, Jul 5, 2015 at 9:05 PM, Kirk Fraser <<a href="mailto:overcomer.man@gmail.com">overcomer.man@gmail.com</a>><br>
>>>> wrote:<br>
>>>>><br>
>>>>> >> Tim says a multi-core VM is coming for the new Pi.<br>
>>>>><br>
>>>>> > Are you *sure* that's what Tim said?<br>
>>>>><br>
>>>>> Of course my over-hopeful misinterpretation is possible.<br>
>>>>><br>
>>>>> "Squeak runs quite well on a Pi, especially a pi2 - and we're working on<br>
>>>>> the Cog dynamic translation VM right now, which should with luck triple<br>
>>>>> typical performance." - timrowledge » Thu Feb 19, 2015<br>
>>>>><br>
>>>>> <a href="https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=100804&p=698818&hilit=Squeak#p698818" rel="noreferrer" target="_blank">https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=100804&p=698818&hilit=Squeak#p698818</a><br>
>>>>><br>
>>>>> > The trick to getting rid of long delays is more a function of<br>
>>>>> > preallocating everything you can than getting rid of GC's (I've done some<br>
>>>>> > highly interactive stuff in GC environments and preventing GC's is<br>
>>>>> > impractical except over short periods of time, minimizing their frequency<br>
>>>>> > and duration is very doable) One of the things I think I<br>
>>>>> recently saw that should help you in this regard is FFI memory pinning<br>
>>>>> if you're calling out to external code.<br>
>>>>><br>
>>>>> Thanks. Maybe when I find, make, or build a better place to work, I'll<br>
>>>>> be able to tackle some of that. I wouldn't be surprised if a VM is as easy<br>
>>>>> as a compiler once one actually starts working on it.<br>
>>>>><br>
>>>>><br>
>>>>> On Sun, Jul 5, 2015 at 6:31 PM, Phil (list) <<a href="mailto:pbpublist@gmail.com">pbpublist@gmail.com</a>> wrote:<br>
>>>>>><br>
>>>>>> On Sun, 2015-07-05 at 17:12 -0700, Kirk Fraser wrote:<br>
>>>>>> > I used Cuis at first to display hand written G-Codes in graphic form<br>
>>>>>> > for a printed circuit board. I kept up with Cuis through a few<br>
>>>>>> > versions and found a couple of bugs for Juan. Eventually Casey<br>
>>>>>> > advised going to Squeak so I did. Perhaps my requests were getting<br>
>>>>>> > annoying.<br>
>>>>>> ><br>
>>>>>><br>
>>>>>> Perhaps you misinterpreted what Casey said? Definitely have all<br>
>>>>>> options<br>
>>>>>> (Squeak, Pharo, Cuis etc.) as part of your toolkit. Squeak in<br>
>>>>>> particular has a very active mailing list and you'll find a lot of<br>
>>>>>> existing code to play with. I personally do most of my development in<br>
>>>>>> Cuis, some in Pharo (for things like Seaside that don't yet exist in<br>
>>>>>> Cuis), and a bit still in Squeak. They all have their place depending<br>
>>>>>> on your needs. Given your emphasis on performance, I would think that<br>
>>>>>> Cuis is going to be the place where you can maximize it. (all the above<br>
>>>>>> Smalltalk variants use essentially the same core VM, it's the plugins<br>
>>>>>> and images that really differ)<br>
>>>>>><br>
>>>>>> > I'm mostly interested in using a multi-core Squeak with GC control<br>
>>>>>> > for<br>
>>>>>> > my robot. Tim says a multi-core VM is coming for the new Pi. He<br>
>>>>>> > hasn't answered on GC control. With multi-core a user need not see<br>
>>>>>> > GC control but the system should provide 100% GC free service even if<br>
>>>>>> > behind the scenes it momentarily toggles one GC off and lets the<br>
>>>>>> > other<br>
>>>>>> > complete.<br>
>>>>>> ><br>
>>>>>><br>
>>>>>> Are you *sure* that's what Tim said? I see a thread where he's talking<br>
>>>>>> about *build* performance (i.e. compiling the C code for the VM) on a<br>
>>>>>> quad-core with the caveat 'even if Squeak can't directly take<br>
>>>>>> advantage' (i.e. no multi-core VM)<br>
>>>>>><br>
>>>>>> ><br>
>>>>>> > With real time driving, which I hope my robot will do some day,<br>
>>>>>> > getting rid of all 100ms delays is vital.<br>
>>>>>> ><br>
>>>>>><br>
>>>>>> The trick to getting rid of long delays is more a function of<br>
>>>>>> preallocating everything you can than getting rid of GC's (I've done<br>
>>>>>> some highly interactive stuff in GC environments and preventing GC's is<br>
>>>>>> impractical except over short periods of time, minimizing their<br>
>>>>>> frequency and duration is very doable) One of the things I think I<br>
>>>>>> recently saw that should help you in this regard is FFI memory pinning<br>
>>>>>> if you're calling out to external code.<br>
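(Phil's preallocation advice can be illustrated with a toy sketch; hypothetical names, and Python rather than Smalltalk:)

```python
# Allocate all buffers once at startup, then acquire/release them in the
# control loop, so steady-state code creates no new objects for the GC.
class BufferPool:
    def __init__(self, count, size):
        # One-time allocation; nothing after this line allocates buffers.
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self):
        return self._free.pop()   # reuse an existing buffer

    def release(self, buf):
        self._free.append(buf)    # return it for the next cycle

pool = BufferPool(count=8, size=1024)
buf = pool.acquire()              # steady state: no heap growth here
buf[0] = 42                       # ... fill with sensor data ...
pool.release(buf)
```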
>>>>>><br>
_______________________________________________<br>
Beginners mailing list<br>
<a href="mailto:Beginners@lists.squeakfoundation.org">Beginners@lists.squeakfoundation.org</a><br>
<a href="http://lists.squeakfoundation.org/mailman/listinfo/beginners" rel="noreferrer" target="_blank">http://lists.squeakfoundation.org/mailman/listinfo/beginners</a><br>
</div></div></blockquote></div><br></div>