I've been enjoying bringing some packages up to date recently and the WeatherStation project is much improved. I know a few people out there have either used it or been inspired by some of the code ideas, so it seems like a good time to update everyone.
First job was to connect up a proper database for storing the data, rather than the "leave it in the image and hope" approach. Since I had been using Levente's PostgresV3 package for work (more on that later) it seemed an obvious thing to use in a less... convoluted... manner as a learning exercise. The current system uses a simple pair of SQL tables: one small table for a list of sensors in use, and a rather bigger table with the sensor readings. I chose to have a 'type' column to discriminate between temperature, humidity, pressure, windspeed, rainfall & wind direction readings, though I imagine there might be good reasons to have separate tables for each despite the fact they would be of the same layout. One column for each reading is a foreign key pointing to the relevant sensor entry, and I can't tell you how smug I felt when I got that to work. Those of you who know me well know I've worked *quite hard* to avoid dealing with databases for about 40 years, so all this was quite an adventure.
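For illustration, the pair of tables might look roughly like this. This is only a sketch: it uses SQLite so it is self-contained, whereas the real system talks to PostgreSQL via PostgresV3, and the table and column names here are my invention, not the actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensors (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL                   -- e.g. 'roof-bme280'
);
CREATE TABLE readings (
    id        INTEGER PRIMARY KEY,
    sensor_id INTEGER NOT NULL REFERENCES sensors(id),  -- FK to the sensor row
    type      TEXT NOT NULL,             -- discriminator: temperature, humidity, ...
    value     REAL NOT NULL,
    taken_at  TEXT NOT NULL              -- ISO-8601 timestamp
);
""")
conn.execute("INSERT INTO sensors (id, name) VALUES (1, 'roof-bme280')")
conn.execute(
    "INSERT INTO readings (sensor_id, type, value, taken_at) VALUES (?, ?, ?, ?)",
    (1, "temperature", 21.4, "2023-09-19T18:00:00"))
conn.commit()
```

The single 'type' column keeps all reading kinds in one table; splitting into one table per kind would just duplicate this layout six times.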
No. 2 was making sure the MQTT system still works in Squeak 6.1alpha - which it seems to do perfectly well. Which is nice. I hope to update it to the MQTT v5.0 spec sometime.
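The MQTT-to-database step boils down to mapping an incoming topic/payload pair onto a readings row. This is not the actual WeatherStation code (that lives in Squeak); just a sketch of the kind of mapping involved, with a made-up 'weather/<sensor>/<type>' topic layout:

```python
# Sketch: turn an MQTT-style topic and payload into a readings-table row.
# The topic layout ('weather/<sensor>/<type>') is invented for illustration.
def message_to_row(topic: str, payload: bytes, timestamp: str) -> dict:
    _, sensor, reading_type = topic.split("/")
    return {"sensor": sensor,
            "type": reading_type,
            "value": float(payload),   # payloads carry a plain numeric reading
            "taken_at": timestamp}

row = message_to_row("weather/roof-bme280/temperature", b"21.4",
                     "2023-09-19T18:00:00")
```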
Part 3 was making a subclass of PlotMorph that reads the data from a database, which turns out to be quite effective. Having a query that fetches all the data and timestamps within a range and for a specific sensor & reading type is pretty neat, and the performance is, frankly, amazing. A few tens of thousands of rows can be read and instantiated in 50 ms or so *on a Raspberry Pi*. Combine this with having a step time of 60 seconds and the cost of the DB read is negligible. Even fetching 1.5 million rows as a test only takes about 12 seconds.
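The query feeding the plot is essentially a range select on sensor and type. Again a hedged sketch in SQLite with invented names, not the real PlotMorph subclass's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (sensor_id INTEGER, type TEXT, value REAL, taken_at TEXT);
INSERT INTO readings VALUES
    (1, 'temperature', 20.1, '2023-09-19T17:58:00'),
    (1, 'temperature', 20.3, '2023-09-19T17:59:00'),
    (1, 'humidity',    55.0, '2023-09-19T17:59:00');
""")
# All timestamped values for one sensor and one reading type within a range,
# ordered for plotting -- one query feeds the whole plot on each step.
rows = conn.execute("""
    SELECT taken_at, value FROM readings
    WHERE sensor_id = ? AND type = ?
      AND taken_at BETWEEN ? AND ?
    ORDER BY taken_at
""", (1, "temperature",
      "2023-09-19T00:00:00", "2023-09-20T00:00:00")).fetchall()
```

With a 60-second step time, one such query per step is lost in the noise even on a Pi.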
So, the PlotMorphs package has been slightly updated to support the DB subclass, the MQTT package is 'certified' as being 6.1 compatible, and the Weather Station package is updated quite a bit.
Thing is, there's a *lot* of stuff you could monitor and track with some MQTT and a database and a way to display it.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Strange OpCodes: SEOB: Set Every Other Bit
Thank you!! :-)

On 20.09.2023 02:58:34, Tim Rowledge tim@rowledge.org wrote: I've been enjoying bringing some packages up to date recently and the WeatherStation project is much improved. ...
good stuff,
thx
---- On Tue, 19 Sep 2023 20:58:16 -0400 Tim Rowledge tim@rowledge.org wrote ---
I've been enjoying bringing some packages up to date recently and the WeatherStation project is much improved. ...
And now even the swiki page is a bit more up to date. http://wiki.squeak.org/squeak/6573
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Strange OpCodes: SDR: Shift Disk Right
WeatherStation is quite substantially updated, with the MQTT fetch/process/store separated from the DB fetch/display, so that separate machines can be used and multiples of each can work together. It even loads from SM now!
And the swiki page is updated to explain it all as best I could.
On 2023-09-25, at 1:04 PM, Tim Rowledge tim@rowledge.org wrote:
And now even the swiki page is a bit more up to date. http://wiki.squeak.org/squeak/6573 ...
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim 'Profanity: the universal programming language'
Thank you for all the work Tim!
All the best,
Ron Teitelbaum
On Thu, Oct 19, 2023 at 8:03 PM Tim Rowledge tim@rowledge.org wrote:
WeatherStation is quite substantially updated, with the mqtt fetch/process/store separated from the db fetch/display so that separate machines can be used and multiples of each can work together. It even loads from SM now!
Hi Tim,
thanks for this interesting report, we need more of these! :-)
First job was to connect up a proper database for storing the data, rather than the "leave it in the image and hope" approach. ... Those of you that know me well, know I've worked *quite hard* to avoid dealing with databases for about 40 years, so all this was quite an adventure.
Hm ... that's actually interesting. I feel your point with avoiding external databases very much. For my Telegram bot, I took the chance, and of course, lost data. :-) So the image itself does not qualify to you as a database due to the lack of atomic transaction semantics (i.e., if the image crashes before it is saved or "committed" with new (MQTT) data, these data won't arrive later again in the system)? Why can't we commit/save the image with a much higher frequency? Is it because there is other state in the system that should not be stored? Or is the snapshotPrimitive just too slow? I wonder whether ImageSegments could be a solution to that problem.
Best, Christoph
--- Sent from Squeak Inbox Talk
On 2023-09-19T17:58:16-07:00, tim@rowledge.org wrote:
I've been enjoying bringing some packages up to date recently and the WeatherStation project is much improved. ...
My apologies. Have you looked at Kafka? There are no tests, so I've no idea if it works right.
Durable, decentralized, partitioned, replicated, ordered, replayable event queues. Pretty sweet!
http://www.squeaksource.com/Kafka/KAFKA%20Client-rabbit.8.mcz
Installer ss project: 'Kafka'; install: 'KAFKA Client'.
••• rabbit ❤️🔥🐰
On Wed, Oct 25, 2023 at 09:17, christoph.thiede@student.hpi.uni-potsdam.de wrote:
Hi Tim, thanks for this interesting report, we need more of these! :-) ... I wonder whether ImageSegments could be a solution to that problem. ...
Ooops, I directed you incorrectly. I have folded Kafka into Crypto-rabbt.70. Please let the work be done there.
rabbit
On 10/25/23 09:37, rabbit wrote:
My apologies. Have you looked at Kafka? There are no tests, so I’ve no idea if it works right.
Hi -
On 2023-10-25, at 6:17 AM, christoph.thiede@student.hpi.uni-potsdam.de wrote: So the image itself does not qualify to you as a database due to the lack of atomic transaction semantics (i.e., if the image crashes before it is saved or "committed" with new (MQTT) data, these data won't arrive later again in the system)?
Well, partly. Remember I've been doing Smalltalk for a *very* long time and frequently working on VM development and the low-level image stuff, so I have seen quite a lot of system crashes that would have lost data. Not to mention the occasional actual machine releasing its magic smoke. This tends to make you a bit nervous about only keeping info in an image.
Also my early experience with trying to use databases was... traumatic. I was an IBM research Fellow and working mostly on UIs for 3D design systems but one of the core parts of any solid modeller is the database of all the geometry and so forth. I had to try to make at least some sense of it. IBM's database stuff in the early '80s was a bit, well, confusing. Especially in the research centre where I worked. I did quite like the idea of the binary set-entity database that we had running on an early Transputing Surface (https://en.wikipedia.org/wiki/Meiko_Scientific and good grief I used to know Miles Chesney) and I swear I actually sort of understood the ideas once upon a time.
Then at ParcPlace circa 1994 the ObjectWorks/VisualWorks product was deemed to require a connection to commercial databases for Business Reasons That Mere Software Engineers Could Not Possibly Understand. That became something of a death-march project and I was very pleased not to be in charge of that one.
So, after working a bit with Levente and his version of a postgresql connection for Squeak (with side steps into some very strange ways of misusing it for ... well, dumb business stuff) I thought it would be smart to try to learn a bit more 'normal' DB usage stuff. At some point I should like to try one of the NoSQL databases like CouchDB for contrast. There is at least a skeleton of a Squeak connection to that (http://wiki.squeak.org/squeak/6153) and it looks like an interesting idea. Has anyone used it recently?
Why can't we commit/save the image with a much higher frequency? Is it because there is other state in the system that should not be stored? Or is the snapshotPrimitive just too slow? I wonder whether ImageSegments could be a solution to that problem.
You could save the image frequently, I guess, though frequently saving a potentially multi-hundred-GB file might annoy your sysadmins. And you really don't want to do that if you aim for a quiet life. Also you have to be careful how you do it; remember, there is a lot of stuff to close, release, compress, and clean before the actual file save. Failing to do that was the cause of perhaps the biggest and most expensive customer issue/debugging project in ParcPlace's history. As in a couple of months of senior management (both customer and PPS) going apeshit about our 'gross incompetence'. That turned out to be some twit making a background process to save the image regularly, without bothering with all that tedious "doing the job right". So images got saved in a very dangerous state and just occasionally that broken image file would be (re)started and ... boom. Given it was a payroll system for a Very Large Company, this was problematic.
ImageSegments might be a good way to save a lot of data. They can be incredibly fast. An ancient example is the old exobox home internet terminal product from the turn of the century (back when having An Internet was an aspirational thing that scared people) where loading the pretty fonts required took a minute or two. Or five. Again, ancient times; people actually used to turn computers off. I think they were scared that the electrons would leak out during the night and seduce their children or something. Startup times of 5 minutes were seen as a bit of a problem, but all those cute Comic Sans etc fonts were important to looking cool. I was able to save the loaded fonts as an ImageSegment easily enough, and even the pathetic cheap machines we were targeting (200-300 MHz Intel low-end stuff with 1 MB of RAM if you were lucky) could then load those fonts in much less than a second.
The big downside is that this is a very 'non-standard' way to save data and businesses are very scared of such things. This is why for practical reasons we need to connect to postgres, oracle, mysql, blah-blah-blah. The good news is that almost anything has a socket interface and so it isn't hard to get started. You should see the insane stuff that was required in Ancient Times; it would make you young folk turn grey. Shudder.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Strange OpCodes: AG: Add Gibberish
Tim, I vouch strongly for Kafka. Combine it with Elasticsearch for logging and a metaRepository to register / publish, by Type assignment to Kafka Topic:
• Encoders by Topic / Type
• Producers by Topic / Type
• ConsumerTypes by Topic / Type / Target (DB write, Hadoop, service calls, event bridging to 3rd parties)
And a distributed dashboard to control maps of service instances on remote hosts.
Weather aggregators could scan station Topics for historical analysis.
This is a solid solution, though perhaps over the top in your use case. Best.
••• rabbit ❤️🔥🐰
On Wed, Oct 25, 2023 at 14:20, Tim Rowledge tim@rowledge.org wrote:
So, after working a bit with Levente and his version of a postgresql connection for Squeak ... I thought it would be smart to try to learn a bit more 'normal' DB usage stuff. At some point I should like to try one of the NoSQL databases like CouchDB for contrast. ...
••• rabbit ❤️🔥🐰
On Wed, Oct 25, 2023 at 19:19, Michael Engelhart mike.engelhart@gmail.com wrote:
CouchDB is quite nice if you need/want a NoSQL database. Its replication system is excellent, and it uses a REST API so it doesn't even really need a connection library. Basic CRUD operations are simple REST calls to the database.
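To make "just REST calls" concrete: a minimal sketch of CouchDB-style CRUD over plain HTTP, assuming a CouchDB at localhost:5984 and a database named 'weather' (both invented here). The requests are only constructed, not actually sent:

```python
import json
import urllib.request

BASE = "http://localhost:5984/weather"  # assumed local CouchDB database

def put_doc(doc_id, doc):
    # Create/update: PUT /db/docid with a JSON body
    return urllib.request.Request(
        f"{BASE}/{doc_id}",
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT")

def get_doc(doc_id):
    # Read: GET /db/docid
    return urllib.request.Request(f"{BASE}/{doc_id}", method="GET")

def delete_doc(doc_id, rev):
    # Delete: DELETE /db/docid?rev=... (CouchDB requires the doc's revision)
    return urllib.request.Request(f"{BASE}/{doc_id}?rev={rev}", method="DELETE")

# Sending any of these is just: urllib.request.urlopen(req)
req = put_doc("reading-2023-09-19T18:00",
              {"sensor": "roof-bme280", "type": "temperature", "value": 21.4})
```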
RabbitDB?
Kafka as I saw recommended here is not for the faint of heart unless you’re just playing with it locally without it being clustered.
Yessir, it’s definitely industrial. I’m thinking a Squeak implementation of Log, I believe it’s called — the object that persists via file-system block writes, or page units of persistence, indexable or something 🤪. High-speed file-system integration at the metal. The original Kafka team at LinkedIn were, I think, DB folks. My design for my team at Dish provided a metaRepository that allowed Encoders🔩, TopicProducers and TopicConsumers to be registered by Topic/Type, with marshaling, generation and handling provided through reflection. We were formed agile and we released twice in the first half year, once per quarter. ZERO BUGS both times. Bam! Super smooth handoff, as I made my leave.
Kafka/Hadoop Dish Event System. 50 million events / day. In Java.
Here’s a write up on what our team achieved after 7 months.
https://www.wsj.com/articles/BL-CIOB-4454
In other words, is industrial level an objective?
🐇
Mike
On Oct 25, 2023, at 2:37 PM, rabbit <rabbit@callistohouse.org> wrote:
Tim, I vouch strongly for Kafka. Combine it with Elasticsearch for logging and a metaRepository to register / publish, by Type assignment to Kafka Topic:
• Encoders by Topic / Type
• Producers by Topic / Type
• ConsumerTypes by Topic / Type / Target (DB write, Hadoop, service calls, event bridging to 3rd parties)
And a distributed dashboard to control maps of service instances on remote hosts.
Weather aggregators could scan station Topics for historical analysis.
This is a solid solution. Perhaps over the top in your use case. Best.
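The register/publish scheme described above — encoders, producers, and consumers looked up by Topic/Type — is at heart a keyed registry. A hedged Python sketch of that shape (every name here is invented for illustration, not Kafka's or Dish's actual API):

```python
from collections import defaultdict

class MetaRepository:
    """Toy registry mapping (topic, type) to an encoder and a list of
    consumer handlers, roughly the shape described in the thread."""
    def __init__(self):
        self.encoders = {}
        self.consumers = defaultdict(list)

    def register_encoder(self, topic, type_, encoder):
        self.encoders[(topic, type_)] = encoder

    def register_consumer(self, topic, type_, handler):
        self.consumers[(topic, type_)].append(handler)

    def publish(self, topic, type_, payload):
        """Encode the payload, then hand it to every consumer
        registered for this topic/type (DB write, bridge, etc.)."""
        encoded = self.encoders[(topic, type_)](payload)
        for handler in self.consumers[(topic, type_)]:
            handler(encoded)

repo = MetaRepository()
repo.register_encoder("weather", "temp", lambda p: f"temp={p}")
seen = []
repo.register_consumer("weather", "temp", seen.append)
repo.publish("weather", "temp", 21.5)  # seen is now ["temp=21.5"]
```

The real systems add reflection-driven marshaling and per-target dispatch, but the lookup-by-Topic/Type core is this simple.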
••• rabbit ❤️🔥🐰
On Wed, Oct 25, 2023 at 14:20, Tim Rowledge <tim@rowledge.org> wrote:
Hi -
On 2023-10-25, at 6:17 AM, christoph.thiede@student.hpi.uni-potsdam.de wrote: So the image itself does not qualify to you as a database due to the lack of atomic transaction semantics (i.e., if the image crashes before it is saved or "committed" with new (MQTT) data, these data won't arrive later again in the system)?
Well, partly. Remember I've been doing Smalltalk for a *very* long time and frequently working on VM development and the low-level image stuff, so I have seen quite a lot of system crashes that would have lost data. Not to mention the occasional actual machine releasing its magic smoke. This tends to make you a bit nervous about only keeping info in an image.
Also my early experience with trying to use databases was... traumatic. I was an IBM research Fellow and working mostly on UIs for 3D design systems but one of the core parts of any solid modeller is the database of all the geometry and so forth. I had to try to make at least some sense of it. IBM's database stuff in the early '80s was a bit, well, confusing. Especially in the research centre where I worked. I did quite like the idea of the binary set-entity database that we had running on an early Transputing Surface (https://en.wikipedia.org/wiki/Meiko_Scientific and good grief I used to know Miles Chesney) and I swear I actually sort of understood the ideas once upon a time.
Then at ParcPlace circa 1994 the objectworks/visualworks product was deemed to require a connection to commercial databases for Business Reasons That Mere Software Engineers Could Not Possibly Understand. That became something of a death-march project and I was very pleased not to be in charge of that one.
So, after working a bit with Levente and his version of a postgresql connection for Squeak (with side steps into some very strange ways of misusing it for ... well, dumb business stuff) I thought it would be smart to try to learn a bit more 'normal' DB usage stuff. At some point I should like to try one of the NoSQL databases like CouchDB for contrast. There is at least a skeleton of a Squeak connection to that (http://wiki.squeak.org/squeak/6153) and it looks like an interesting idea. Has anyone used it recently?
Why can't we commit/save the image with a much higher frequency? Is it because there is other state in the system that should not be stored? Or is the snapshotPrimitive just too slow? I wonder whether ImageSegments could be a solution to that problem.
You could save the image frequently I guess, though frequently saving a potentially multi-hundred GB file might annoy your sysadmins. And you really don't want to do that if you aim for a quiet life. Also you have to be careful how you do it; remember, there is a lot of stuff to close, release, compress, clean, before the actual file save. Failing to do that was the cause of perhaps the biggest and most expensive customer issue/debugging project in ParcPlace's history. As in a couple of months of senior management (both customer and PPS) going apeshit about our 'gross incompetence'. That turned out to be some twit making a background process to save the image regularly; without bothering with all that tedious "doing the job right". So images got saved in a very dangerous state and just occasionally that broken image file would be (re)started and ... boom. Given it was a payroll system for a Very Large Company, this was problematic.
ImageSegments might be a good way to save a lot of data. They can be incredibly fast. An ancient example is the old exobox home internet terminal product from the turn of the century (back when having An Internet was an aspirational thing that scared people) where loading the pretty fonts required took a minute or two. Or five. Again, ancient times, people actually used to turn computers off. I think they were scared that the electron would leak out during the night and seduce their children or something. Startup times of 5 minutes were seen as a bit of a problem but all those cute Comic Sans etc fonts were important to looking cool. I was able to save the loaded fonts as an ImageSegment easily enough and even on the pathetic cheap machines we were targeting (2-300MHz intel low-end stuff with 1Mb ram if you were lucky) could then load those fonts in much less than a second.
The big downside is that this is a very 'non-standard' way to save data and businesses are very scared of such things. This is why for practical reasons we need to connect to postgres, oracle, mysql, blah-blah-blah. The good news is that almost anything has a socket interface and so it isn't hard to get started. You should see the insane stuff that was required in Ancient Times; it would make you young folk turn grey. Shudder.
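As a hedged illustration of the "almost anything has a socket interface" point — this is not the Squeak PostgresV3 code — here is a minimal Python sketch that builds a PostgreSQL v3.0 wire-protocol StartupMessage, the first thing a client writes down the socket after connecting; the user and database names are placeholders:

```python
import struct

def pg_startup_message(user, database):
    """Build a PostgreSQL v3.0 StartupMessage: an Int32 total length,
    an Int32 protocol version (3 << 16, i.e. 196608), then
    NUL-terminated key/value parameter pairs, ending with an extra NUL."""
    params = b""
    for key, value in (("user", user), ("database", database)):
        params += key.encode() + b"\x00" + value.encode() + b"\x00"
    params += b"\x00"
    # The length field counts itself (4 bytes) plus the version (4 bytes).
    length = 4 + 4 + len(params)
    return struct.pack("!II", length, 196608) + params

# A real client would now do something like:
#   sock = socket.create_connection(("localhost", 5432))
#   sock.sendall(pg_startup_message("weather", "sensors"))
msg = pg_startup_message("weather", "sensors")
```

The server answers with an authentication request, and from there the protocol is a simple tagged-message exchange — roughly what a package like PostgresV3 has to speak under the hood.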
tim
tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Strange OpCodes: AG: Add Gibberish
•cough•
I’m thinking a Squeak implementation of Log, to be distributed partitioned replicated slices of paged data, in Squeak. Peer-2-Peer, a la Croquet Net. Add data block transfer api to the Replicator?
SqueakSource, so it cannot be knocked over, with clustering, even were a rack to fall over.
SqueakyCloud.
••• rabbit ❤️🔥🐰
Here are the Java files for Kafka's LogSegment and friends.
https://www.dropbox.com/scl/fi/d4ai8f2smga4v37h1krxa/kafka-log.zip?rlkey=hz5...
🐰
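For anyone not inclined to read the Java, the core of a Kafka-style log segment is a flat append-only file of length-prefixed records addressed by logical offset. A hedged, much-simplified Python sketch of that idea (this is not Kafka's actual on-disk format):

```python
import os
import struct
import tempfile

class LogSegment:
    """Toy append-only log: each record is a 4-byte big-endian length
    prefix followed by the payload. An in-memory list maps logical
    offset -> file position (Kafka's index is sparse and on disk;
    this just shows the idea)."""
    def __init__(self, path):
        self.file = open(path, "ab+")
        self.index = []  # index[offset] = byte position in file

    def append(self, payload: bytes) -> int:
        pos = self.file.seek(0, os.SEEK_END)
        self.index.append(pos)
        self.file.write(struct.pack("!I", len(payload)) + payload)
        self.file.flush()
        return len(self.index) - 1  # the record's logical offset

    def read(self, offset: int) -> bytes:
        self.file.seek(self.index[offset])
        (length,) = struct.unpack("!I", self.file.read(4))
        return self.file.read(length)

path = os.path.join(tempfile.mkdtemp(), "00000000.log")
log = LogSegment(path)
log.append(b"temp=21.5")
log.append(b"humidity=40")
```

Kafka layers sparse indexes, time indexes, segment rolling, and replication on top, but the append/seek/read core looks much like this.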
-- ••• rabbit ❤️🔥🐰
Fascinating ... so what I think we actually need then would be an object-oriented DBMS right in Squeak with automatic grounding to the file system. Heavy black boxes outside of the image are something that I find scary. :-)
Best, Christoph
--- Sent from Squeak Inbox Talk
Fascinating ... so what I think we actually need then would be an object-oriented DBMS right in Squeak with automatic grounding to the file system.
Squeak has had that for at least 15 years.
Oh, of course, sorry! I tend to forget that because I cannot use it in Squeak Trunk. So what other reasons are there to use Postgres instead of Magma? I'm curious. :-)
Best, Christoph
From: Chris Muller <asqueaker@gmail.com>
Sent: Thursday, October 26, 2023 11:01 PM
To: The general-purpose Squeak developers list <squeak-dev@lists.squeakfoundation.org>
Subject: [squeak-dev] Re: MQTT, PostgreSQL, PlotMorphs, and the WeatherStation project
The usual, and often final, reason would be something along the lines of "the customer wants it". Maybe they already use PostgreSQL/DB2/CouchDB/Oracle/whateverDB. Maybe they have sysadmins in charge of all this that refuse to countenance a "non-standard bit of nonsense, why can't we use something proper like PHP".
In my case, I had been using the postgres stuff, had a suitable db set up and wanted to learn a bit more about the system. It would be interesting to rewrite the db connection to try Magma, and CouchDB, and MongoDB etc etc just to see what it turns out like. Volunteers?
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Strange OpCodes: BPP: Branch Pretty Please
As Chris says, Magma is a Squeak option, and for corporate overlord purposes the net API stuff for postgresql etc. works.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim "Wibble" said Pooh the stress beginning to show.