[Newbies] Raspberry Pi v. Raspberry St

Kirk Fraser overcomer.man at gmail.com
Fri Jul 17 05:32:38 UTC 2015


Thanks Ben and all.  I'll keep these posts.

On Thu, Jul 16, 2015 at 10:10 AM, Ben Coman <btc at openinworld.com> wrote:

> And another option might be using the Programmable Realtime Unit (PRU)
> of the BeagleBone Black.
> For example, the time per toggle when tight-loop toggling an LED:
> * 1 GHz ARM Cortex-A8 = 200 ns
> * 200 MHz PRU = 5 ns (one instruction per 5 ns clock cycle)
>
> ELC 2015 - Enhancing Real-Time Capabilities with the PRU - Rob
> Birkett, Texas Instruments
>
> http://events.linuxfoundation.org/sites/events/files/slides/Enhancing%20RT%20Capabilities%20with%20the%20PRU%20final.pdf
> https://www.youtube.com/watch?v=plCYsbmMbmY
>
> cheers -ben
>
>
> On Tue, Jul 14, 2015 at 7:42 PM, Ben Coman <btc at openinworld.com> wrote:
> > Smalltalk-to-FPGA work may be of interest...
> >
> > http://www.slideshare.net/esug/luc-fabresse-iwst2014
> >
> >
> > http://esug.org/data/ESUG2014/IWST/Papers/iwst2014_From%20Smalltalk%20to%20Silicon_Towards%20a%20methodology%20to%20turn%20Smalltalk%20code%20into%20FPGA.pdf
> >
> > cheers -ben
> >
> >> On Tue, Jul 14, 2015 at 11:55 AM, Kirk Fraser <overcomer.man at gmail.com>
> >> wrote:
> >> Hi Casey,
> >>
> >> Thanks for the suggestion.  I will have multiple connected controller
> >> boards, and with your suggestion maybe I'll try a Pi for each limb and
> >> each webcam, or maybe your ARM9 suggestion.
> >>
> >> To prove it's as good as a human in performance, I want it to do minor
> >> acrobatics like cartwheels and balancing tricks, maybe a Chuck Norris
> >> kick or jumping over a fence with one hand on a post, or moves like
> >> those in free-running videos.  Stuff I could not do myself.  But it all
> >> waits on money.  Maybe I'll make better progress next year when social
> >> security kicks in.
> >>
> >> As for human performance goals, one professor wrote that it takes 200
> >> cycles per second to hop on one leg.  Somewhere I read that human touch
> >> can sense 1/32,000 of an inch.  I don't have the other figures yet.  I
> >> may not be able to drive an arm as fast as a human boxer's punch (200
> >> mph), but as long as it's fast enough to drive a vehicle on a slow road
> >> (not I-5), that might be enough until faster computers are here.
> >>
> >> The vision system seems like a major speed bottleneck.  Maybe a
> >> mini-cloud could give one 64th of the image to each processor to
> >> analyze, then assemble larger object detections in time for the next
> >> frame.  The DARPA Atlas robot used 4 cores for each camera, I think.
> >> But a mini-cloud set up nearby to process vision and return either the
> >> objects with measurements or instructions would be a lot of work.  The
> >> more I write, the more I see why the head of DARPA robotics said they
> >> cost 1-2 million, as you have to hire a team of programmers or write a
> >> really intricate learning program.
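> >>
> >> To make the tiling concrete, here is a minimal Squeak sketch of the
> >> 64-way split, assuming the frame arrives as a Form (the per-tile
> >> analysis and the reassembly step are left abstract):
> >>
> >>   | frame tileExtent tiles |
> >>   frame := Form extent: 640@480 depth: 32.  "stand-in for one webcam frame"
> >>   tileExtent := frame extent // 8.  "an 8 x 8 grid gives 64 tiles"
> >>   tiles := OrderedCollection new.
> >>   0 to: 7 do: [:row |
> >>       0 to: 7 do: [:col |
> >>           tiles add: (frame copy: ((col @ row * tileExtent) extent: tileExtent))]].
> >>   "each tile in tiles could then go to its own core for analysis"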
> >>
> >> Kirk
> >>
> >>
> >> On Mon, Jul 13, 2015 at 5:58 PM, Casey Ransberger
> >> <casey.obrien.r at gmail.com> wrote:
> >>>
> >>> Hey Kirk,
> >>>
> >>> I like Ralph's suggestion of doing the timing-specific stuff on a
> >>> dedicated microcontroller.
> >>>
> >>> I'd recommend going one better: use more than one microcontroller.
> >>> Robots need to do a lot in parallel; if the robot has to stop driving
> >>> in order to think, that's a problem (although the converse would be
> >>> decidedly human!)  Anyway, it sounds like real-time is not negotiable
> >>> in your view, so green threads won't cut it either.
> >>>
> >>> Mine has... six controllers in total.  That's not counting the ARM9,
> >>> which is more like a full computer (e.g., it runs Linux.)
> >>>
> >>> I think six, anyway; there could be more hiding in there.  Two drive
> >>> sensors, three drive motors, and one wired up close to the ARM board
> >>> to coordinate the other controllers on behalf of what the Linux system
> >>> wants them doing.
> >>>
> >>> I'm curious: have you figured out what the average, best-case, and
> >>> worst-case latencies are for human reflexes?  In my view, matching or
> >>> beating that benchmark is where the money probably is.
> >>>
> >>> --C
> >>>
> >>> On Jul 6, 2015, at 12:39 PM, Kirk Fraser <overcomer.man at gmail.com>
> >>> wrote:
> >>>
> >>> Ralph Johnson,
> >>>
> >>> That's an excellent suggestion and an excellent story, thank you very
> >>> much!  Letting the human interface in Smalltalk program the robot
> >>> controller, instead of being the robot controller, sounds good.
> >>>
> >>> My robot uses a network of Parallax microcontroller chips to drive
> >>> hydraulic valves.  The chips can be programmed via USB for simple
> >>> tasks like moving one joint from point A to B, but since each
> >>> controller has 8 cores, more complex tasks like grasping or walking
> >>> can be done on the MCUs, or on a small Raspberry Pi or other hardware,
> >>> in a non-GC or controllable-GC language.
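> >>>
> >>> For the simple point-A-to-B case, something like this minimal Squeak
> >>> sketch could issue a command over the USB serial link (the port
> >>> number, baud rate, and MOVE command syntax are all assumptions about
> >>> the MCU side):
> >>>
> >>>   | port |
> >>>   port := SerialPort new.
> >>>   port baudRate: 115200.  "assumed rate for the MCU link"
> >>>   port openPort: 0.  "assumed port number"
> >>>   port nextPutAll: 'MOVE 3 1200', String cr.  "hypothetical: joint 3 to position 1200"
> >>>   port close.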
> >>>
> >>> A harder part to wrap my head around is handling the webcam vision
> >>> system and artificial intelligence while remaining responsive enough
> >>> to do time-critical tasks like cartwheels and other acrobatic
> >>> choreography.
> >>>
> >>> I know my own mind in effect shuts down most of its intellectual
> >>> pursuits when engaged in heavy physical activity.  Maybe the robot
> >>> must do the same: think more creatively when idling and pay closer
> >>> attention while working.  That takes care of the AI timing.
> >>>
> >>> The heavy load of vision processing appears to need a mini-cloud of
> >>> cores to reduce the time to identify and measure objects from
> >>> contours and other information.  To guarantee performance, they would
> >>> also need to run a non-GC language that could be programmed from
> >>> Squeak interactively as new objects are being learned.  I haven't
> >>> worked with a laser range finder, but I suspect they use it to narrow
> >>> the focus onto moving objects and process the video in more detail in
> >>> those areas.
> >>>
> >>> The current buzzword "co-robots", meaning robots that work beside or
> >>> cooperatively with people in symbiotic relationships with human
> >>> partners, suggests everyone will need a robot friend, which will
> >>> require an artificial intelligence capable of intelligent thought.
> >>> As most Americans are Christian, it would make sense for a
> >>> human-compatible AI to be based on the Bible.  That is what I would
> >>> love to work on.  But that level of thought needs a creative GC
> >>> environment like Squeak at present.
> >>>
> >>> I've been thinking that using a Smalltalk GUI to issue command rules
> >>> that set an agenda for automatic text analysis and editing might be
> >>> fun, letting the computer do the editing instead of me.  That way it
> >>> could update the AI knowledge, for example when a preferred synonym
> >>> is discovered, without taking human time for much beyond the setup.
> >>>
> >>> Your Wikipedia entry shows a webpage and blog that apparently are
> >>> dead links.  Would you be interested in being a team member on my
> >>> SBIR/STTR grant application(s) for AI and robots, responding to
> >>> http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf15505 ?  I've
> >>> enlisted help in writing the application from Oregon's Small Business
> >>> Development Center, and I'm told I will meet with an SBIR road trip
> >>> in August.  (I was also told I need a Ph.D. on my team since I don't
> >>> have one.)
> >>>
> >>> Kirk Fraser
> >>>
> >>>
> >>> On Mon, Jul 6, 2015 at 4:19 AM, Ralph Johnson <johnson at cs.uiuc.edu>
> >>> wrote:
> >>>>
> >>>> Here is another possibility.
> >>>>
> >>>> Take a look at Symbolic Sound, a company that makes a system called
> >>>> Kyma.
> >>>> http://kyma.symbolicsound.com/
> >>>>
> >>>> This company has been around for over twenty years.  Its product has
> >>>> always been the fastest music synthesis system in the world that
> >>>> gives you total control over your sound.  And by "total", I mean it
> >>>> gives you the ability to mathematically specify each sound wave, if
> >>>> you want, which is actually too much detail for most people.  And it
> >>>> is all written in Smalltalk.  Not Squeak, of course, since Squeak
> >>>> wasn't around then, but it could have been done in Squeak.  And
> >>>> perhaps they ported it to Squeak.  I haven't talked to them for a
> >>>> long time, so I don't know what they did, but from the screenshots I
> >>>> think it is still a very old version of VisualWorks.
> >>>>
> >>>> Anyway, how do they make it so fast?  How can they make something that
> >>>> can be used for hours without any GC pauses?
> >>>>
> >>>> The trick is that the sound is produced on an attached DSP.  The GUI
> >>>> is in Smalltalk on a PC, and it generates code for the DSP.  It is
> >>>> non-trivial to make the compiler so fast that when you press "play"
> >>>> it can immediately start up the DSP and start producing sound.  It
> >>>> does this (rather, it did this, since they might have changed the way
> >>>> it works) by producing just enough code to run the DSP for a few
> >>>> seconds and then starting the DSP while it generates the rest of the
> >>>> code.  Kyma is literally writing the program into DSP memory at the
> >>>> same time as the DSP is running the program, producing sound.
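> >>>>
> >>>> In toy Smalltalk pseudocode the trick looks something like this (the
> >>>> dsp object and its messages loadChunk: and start are purely
> >>>> hypothetical, and codeChunks stands in for the compiler's output):
> >>>>
> >>>>   | generator dsp |
> >>>>   dsp := self attachedDSP.  "hypothetical accessor for the device"
> >>>>   generator := codeChunks readStream.  "assume code arrives in chunks"
> >>>>   dsp loadChunk: generator next.  "enough code for the first few seconds"
> >>>>   dsp start.  "sound begins almost immediately"
> >>>>   [generator atEnd] whileFalse:
> >>>>       [dsp loadChunk: generator next]  "stream the rest while the DSP plays"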
> >>>>
> >>>> Anyway, maybe that is the right approach to programming robots.  You
> >>>> don't even need to use two computers.  Imagine you had two computers,
> >>>> one running Squeak and the other a simple real-time machine designed
> >>>> for controlling robots, but not very sophisticated.  Squeak programs
> >>>> the simple computer, and can change its program dynamically.  The
> >>>> simple computer has no GC.  Since Squeak is a VM on a computer, the
> >>>> real-time computer can be a VM, too.  So you could run them both on
> >>>> your PC, or you could run them on two separate computers for better
> >>>> performance.
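> >>>>
> >>>> As a toy sketch of that split, Squeak might push a new control
> >>>> program to the real-time VM over a socket (the host name, port, and
> >>>> wire format here are all made up):
> >>>>
> >>>>   | connection |
> >>>>   connection := SocketStream openConnectionToHostNamed: 'robot-vm' port: 9999.
> >>>>   connection nextPutAll: 'joint 3 moveTo: 1200 over: 50'; flush.  "hypothetical program text"
> >>>>   connection close.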
> >>>>
> >>>> I would be happy to talk more about this.  But I'd like to talk about
> >>>> the beginning of Kyma.  The owners of Symbolic Sound are Carla
> >>>> Scaletti and Kurt Hebel.  Carla has a PhD in music, and Kurt in
> >>>> Electrical Engineering.  I met Carla after she had her PhD.  She
> >>>> wanted to get an MS in computer science so she could prove her
> >>>> computer music expertise, and she ended up getting it with me.  She
> >>>> took my course on OOP&D that used Smalltalk.  For her class project
> >>>> (back in 1987, I think) she wrote a Smalltalk program that ran on the
> >>>> Mac and produced about ten seconds of sound, but it took several
> >>>> minutes to do it.  Hardly real time.  However, she was used to using
> >>>> a supercomputer (a Cray?) to generate sounds that still weren't real
> >>>> time, so she was very pleased that she could do it on the Mac at all,
> >>>> and though Smalltalk was slower than Fortran, in her opinion the ease
> >>>> of use was so great that she didn't mind the speed difference.  As
> >>>> she put it, the speed difference between a Mac and a Cray was bigger
> >>>> than that between Smalltalk and Fortran.  She ended up turning this
> >>>> into the first version of Kyma, and that became the subject of her MS
> >>>> thesis.  I can remember when she showed it in class.  She was the
> >>>> only woman in the class, and the other students knew she was a
> >>>> musician, i.e. not *really* a programmer.  She was quiet during
> >>>> class, so they had not had a chance to have their prejudices
> >>>> remedied.  Her demo at the end of the semester blew them away.
> >>>>
> >>>> Kurt had built a DSP that their lab used.  (The lab was part of the
> >>>> PLATO project, I believe, one of the huge number of creative results
> >>>> of that very significant project at Illinois.)  It was called the
> >>>> Capybara.  This was before the time when you could just buy a good
> >>>> DSP on a chip, but that time came very soon, and then they used the
> >>>> commercial chips.  For her MS, she converted her system to use the
> >>>> Capybara, and this was when she figured out how to make it start
> >>>> making music within a fraction of a second of pressing the "play"
> >>>> button.  Kurt also used Smalltalk with the Capybara.  His PhD was
> >>>> about automatically designing digital filters, and his software also
> >>>> generated code for the Capybara, though it was actually quite
> >>>> different from Kyma.
> >>>>
> >>>> The two of them worked on several different projects over the next
> >>>> few years, but kept improving Kyma.  Along the way Kurt started
> >>>> building boards that had several commercial DSPs on them.  Eventually
> >>>> they decided to go commercial and started Symbolic Sound.
> >>>>
> >>>> -Ralph Johnson
> >>>>
> >>>> On Sun, Jul 5, 2015 at 9:05 PM, Kirk Fraser <overcomer.man at gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> >> Tim says a multi-core VM is coming for the new Pi.
> >>>>>
> >>>>> > Are you *sure* that's what Tim said?
> >>>>>
> >>>>> Of course, my over-hopeful misinterpretation is possible.
> >>>>>
> >>>>> "Squeak runs quite well on a Pi, especially a pi2 - and we're
> working on
> >>>>> the Cog dynamic translation VM right now, which should with luck
> triple
> >>>>> typical performance."  - timrowledge » Thu Feb 19, 2015
> >>>>>
> >>>>>
> https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=100804&p=698818&hilit=Squeak#p698818
> >>>>>
> >>>>> > The trick to getting rid of long delays is more a function of
> >>>>> > preallocating everything you can than getting rid of GCs (I've
> >>>>> > done some highly interactive stuff in GC environments, and
> >>>>> > preventing GCs is impractical except over short periods of time;
> >>>>> > minimizing their frequency and duration is very doable).  One of
> >>>>> > the things I think I recently saw that should help you in this
> >>>>> > regard is FFI memory pinning, if you're calling out to external
> >>>>> > code.
> >>>>>
> >>>>> Thanks.  Maybe when I find, make, or build a better place to work,
> >>>>> I'll be able to tackle some of that.  I wouldn't be surprised if a
> >>>>> VM is as easy as a compiler once one actually starts working on it.
> >>>>>
> >>>>>
> >>>>> On Sun, Jul 5, 2015 at 6:31 PM, Phil (list) <pbpublist at gmail.com>
> >>>>> wrote:
> >>>>>>
> >>>>>> On Sun, 2015-07-05 at 17:12 -0700, Kirk Fraser wrote:
> >>>>>> > I used Cuis at first to display hand-written G-codes in graphic
> >>>>>> > form for a printed circuit board.  I kept up with Cuis through a
> >>>>>> > few versions and found a couple of bugs for Juan.  Eventually
> >>>>>> > Casey advised going to Squeak, so I did.  Perhaps my requests
> >>>>>> > were getting annoying.
> >>>>>> >
> >>>>>>
> >>>>>> Perhaps you misinterpreted what Casey said?  Definitely have all
> >>>>>> options (Squeak, Pharo, Cuis, etc.) as part of your toolkit.  Squeak
> >>>>>> in particular has very active mailing lists, and you'll find a lot
> >>>>>> of existing code to play with.  I personally do most of my
> >>>>>> development in Cuis, some in Pharo (for things like Seaside that
> >>>>>> don't yet exist in Cuis), and a bit still in Squeak.  They all have
> >>>>>> their place depending on your needs.  Given your emphasis on
> >>>>>> performance, I would think that Cuis is going to be the place where
> >>>>>> you can maximize it.  (All the above Smalltalk variants use
> >>>>>> essentially the same core VM; it's the plugins and images that
> >>>>>> really differ.)
> >>>>>>
> >>>>>> > I'm mostly interested in using a multi-core Squeak with GC control
> >>>>>> > for my robot.  Tim says a multi-core VM is coming for the new Pi.
> >>>>>> > He hasn't answered on GC control.  With multi-core, a user need
> >>>>>> > not see GC control, but the system should provide 100% GC-free
> >>>>>> > service, even if behind the scenes it momentarily toggles one GC
> >>>>>> > off and lets the other complete.
> >>>>>> >
> >>>>>>
> >>>>>> Are you *sure* that's what Tim said?  I see a thread where he's
> >>>>>> talking about *build* performance (i.e., compiling the C code for
> >>>>>> the VM) on a quad-core, with the caveat 'even if Squeak can't
> >>>>>> directly take advantage' (i.e., no multi-core VM).
> >>>>>>
> >>>>>> >
> >>>>>> > With real-time driving, which I hope my robot will do some day,
> >>>>>> > getting rid of all 100 ms delays is vital.
> >>>>>> >
> >>>>>>
> >>>>>> The trick to getting rid of long delays is more a function of
> >>>>>> preallocating everything you can than getting rid of GCs (I've done
> >>>>>> some highly interactive stuff in GC environments, and preventing
> >>>>>> GCs is impractical except over short periods of time; minimizing
> >>>>>> their frequency and duration is very doable).  One of the things I
> >>>>>> think I recently saw that should help you in this regard is FFI
> >>>>>> memory pinning, if you're calling out to external code.
> >>>>>>
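> >>>>>> A minimal Squeak sketch of that preallocation idea, where
> >>>>>> readSensorsInto: and computeWith: are hypothetical stand-ins for
> >>>>>> your own code:
> >>>>>>
> >>>>>>   | buffer |
> >>>>>>   buffer := FloatArray new: 4096.  "allocated once, before the hot loop"
> >>>>>>   [true] whileTrue:
> >>>>>>       [self readSensorsInto: buffer.  "hypothetical: fills the existing array"
> >>>>>>        self computeWith: buffer]  "no per-cycle allocation, so little GC pressure"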