tx-logging vs. redundancy for databases

Martin Drautzburg martin.drautzburg at web.de
Fri May 14 07:08:03 UTC 2004

Chris Muller <afunkyobject at yahoo.com> writes:

> > When the memory content was lost, Oracle replays the redo log entries
> > that have a larger system change number (SCN) than the database
> > files. This is done in two phases: first ALL changes are applied (roll
> > forward) and then transactions that lack a commit are undone (roll
> > back) in the usual way using rollback segments.
> When memory was lost, why does it bother to apply any of the changes without a
> commit in the first place?  Couldn't it just skip them entirely and only apply
> those that were written wholly?

I believe this is because you need the before-images stored in the
rollback segments. If a sequence of changes is logged but the commit
is not, you cannot simply ignore those changes: you have to apply
them and then roll them back in order to restore the before-images.
You could possibly avoid this by looking ahead in the redo log and
implementing some smarter algorithm, but in any case that is more
work than simply ignoring them. It is probably easiest to use the
regular mechanism to replay the log and then roll back as usual.
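As a toy sketch of that two-phase recovery (a hypothetical data model,
nothing like Oracle's actual record format), this is roughly what
"roll forward everything, then roll back the uncommitted" looks like:

```python
def recover(db, redo_log):
    """Replay a redo log after a crash (toy model).

    Phase 1 (roll forward): apply ALL logged changes, saving each
    row's before-image into a rollback segment as we go.
    Phase 2 (roll back): undo every transaction whose commit record
    never made it into the log, using the saved before-images.
    """
    rollback_segment = {}   # (txn, key) -> before-image
    committed = {rec[1] for rec in redo_log if rec[0] == "commit"}

    # Phase 1: roll forward every change, committed or not.
    for rec in redo_log:
        if rec[0] == "change":
            _, txn, key, new_value = rec
            # keep only the FIRST before-image per (txn, key)
            rollback_segment.setdefault((txn, key), db.get(key))
            db[key] = new_value

    # Phase 2: roll back uncommitted transactions, newest first.
    for (txn, key), before in reversed(list(rollback_segment.items())):
        if txn not in committed:
            if before is None:
                db.pop(key, None)   # row did not exist before
            else:
                db[key] = before
    return db
```

Note that phase 2 only consults the rollback segment that phase 1
rebuilt, which is why you cannot just skip the uncommitted entries
during the replay.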

Also bear in mind that writing redo logs is not *that* expensive,
because they are written sequentially when the redo log buffer is
flushed, so no random seeks are required.
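The cheap part is that a log flush is a single append at the end of
one file. A minimal sketch (hypothetical record format, stdlib only):

```python
import os

def append_redo(path, scn, payload):
    """Append one redo record to the log file at `path`.

    The write position only ever moves forward, so flushing the
    redo log buffer costs one sequential write plus a sync, with
    no per-block random seek into the data files.
    """
    with open(path, "ab") as log:
        log.write(f"{scn}:{payload}\n".encode())
        log.flush()
        os.fsync(log.fileno())   # force the record to stable storage
```

Applying the same changes to the data files instead would mean one
seek per touched block, which is exactly what the log avoids.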

Another interesting topic is hot backups. In that case Oracle freezes
the database files and *only* writes to the redo logs (they are the
real thing). If Oracle has to flush blocks from the buffer cache
during a hot backup, it flushes those blocks to the redo log and not
to the database file. In a way it logs the flushing of the buffer
cache during a hot backup.
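That redirection can be sketched like this (a toy buffer-cache model,
not Oracle's actual mechanism):

```python
class BufferCache:
    """Toy buffer cache that redirects flushes during a hot backup."""

    def __init__(self):
        self.datafile = {}      # stands in for the on-disk data file
        self.redo_log = []      # stands in for the redo log
        self.cache = {}         # dirty blocks not yet flushed
        self.in_hot_backup = False

    def write(self, block, value):
        self.cache[block] = value

    def flush(self):
        for block, value in self.cache.items():
            if self.in_hot_backup:
                # Data file is frozen for the backup: log the
                # flushed block instead of writing it in place.
                self.redo_log.append(("flushed_block", block, value))
            else:
                self.datafile[block] = value
        self.cache.clear()
```

Once the backup finishes, recovery can apply the logged blocks to the
data file in the usual roll-forward manner.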

More information about the Squeak-dev mailing list