[Seaside] Hybrid persistent idea. Was: Seaside & Ruby Rails Compared

Avi Bryant avi.bryant at gmail.com
Fri Sep 9 06:47:56 CEST 2005

On Sep 7, 2005, at 5:53 AM, Dmitry Dorofeev wrote:
> A common example is a list of car sale adverts: search by model,
> year, price, location, etc. -- up to 20 options. Dealing with 20
> indexes manually in an OODBMS is quite boring, at least in OmniBase,
> and manually combining OR/AND/NOT logic across 20 indexes is also
> boring, though with a good API it is probably much the same as SQL,
> or can be similar in some way.
> And I recall my experience with MySQL, where reachability by
> navigation was a pain in the ... for a complex DB (assuming a slow
> JOIN implementation), but searching against a table of records was
> very pleasant. So I think it is a good idea to store my data as
> objects in, say, OmniBase, and keep plain tables of only the
> necessary fields in, say, PostgreSQL. PostgreSQL or MySQL will then
> take care of indexes and searches, returning me ObjectIDs, so I can
> fetch the real objects from OmniBase. There is still some extra
> work here, but I feel quite comfortable with the idea.
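
The lookup half of that idea might read something like the sketch
below. It assumes a car_sales table in PostgreSQL mirroring the
searchable fields plus an oid column; `postgres`, #query:, and
#objectAtOid: are stand-ins for whatever the actual Postgres client
and OmniBase lookup calls turn out to be.

  "Hypothetical names throughout -- a sketch, not a working snippet."
  | oids txn adverts |
  oids := postgres query:
      'SELECT oid FROM car_sales
       WHERE model = ''Volvo'' AND year >= 2000 AND price <= 15000'.
  txn := omnibase newTransaction.
  adverts := oids collect: [:each | txn objectAtOid: each].
  txn abort.  "read-only: nothing to commit"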

That's a cool idea.  I've found that in practice, you often end up  
with a system of indices in an OODB that's pretty separate from the  
object graph anyway - so why not offload them to something that has  
highly optimized index operations?  One potential problem I can see  
is that you'd lose transactional semantics: if, during a single  
update, the OmniBase transaction goes through but the PostgreSQL one  
doesn't, they'll be at least temporarily inconsistent.  If you're  
careful about error handling, however, you can probably get away with  
it in practice.
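
Concretely, the "careful error handling" might mean committing one
side first and compensating if the other fails -- roughly as below,
where #store:, #commit, #execute:, and #scheduleReindexFor: are all
illustrative names, not real OmniBase or driver API:

  "Commit the OODB first; if the SQL index update then fails, queue
   a repair so the table catches up with the object store later."
  | txn |
  txn := omnibase newTransaction.
  [txn store: advert.
   txn commit.
   postgres execute: 'INSERT INTO car_sales (oid, model, year)
       VALUES ($1, $2, $3)']
      on: Error
      do: [:e |
          self scheduleReindexFor: advert.
          e signal]

You can of course order it the other way (SQL first, OODB second);
either way one side plays leader and the other gets repaired.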

There's some nice potential synergy with ROE here, which keeps  
relational queries as lazy virtual collections until you try to  
iterate over them.  You'd just need to apply one further lazy  
transformation (#asObjects?) which mapped the OID column to actual  
objects pulled out of the OODB.  So you'd stay in the ROE world of  
building up an SQL parse tree, refining it with whatever #select: and  
#intersection: and #copyFrom:to: filtering you needed, up until the  
moment that you absolutely needed the objects themselves.
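
The ROE side might then read something like this, where carSales is
assumed to be a ROE relation over the index table and #asObjects: is
the hypothetical final lazy step that maps the oid column to OmniBase
objects -- everything before the #do: just builds up the SQL parse
tree:

  | hits |
  hits := ((carSales select: [:row | row model = 'Volvo'])
              select: [:row | row year >= 2000])
              copyFrom: 1 to: 20.
  (hits asObjects: omnibase)
      do: [:advert | Transcript show: advert printString; cr]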

I'd be curious to see what the performance was actually like on a  
system like this - does the latency of having to deal with two  
separate databases in sequence, and converting between various forms  
of OID and query representation, slow things down noticeably?
