Discussion:
architecture type question
t***@gmail.com
2014-03-31 14:49:56 UTC
hello all

there is a push by developers to do the following on our bread-and-butter OLTP system.

It seems like a nice solution. Let me see if I can outline it:

1. Applications read and write to the operational database. When they delete a record, all of the cascades baked into the design (at either the database level or the Hibernate level) delete the dependent rows as well.
2. Replication will take the delete as-is and delete the corresponding records in the data warehouse staging database.
3. Delete triggers are defined in the data warehouse staging database to copy the record into a corresponding "deleted records" table with a deleted flag set to true.
4. ETL works on a view that combines the two tables in the warehouse staging database, so it never knows that records were actually deleted.
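The trigger-plus-view mechanics of steps 3 and 4 can be sketched as follows. This uses SQLite (via Python's stdlib `sqlite3`) purely as a stand-in for the warehouse staging database; the table and column names are made up for illustration:

```python
# Sketch of steps 3-4: a delete trigger preserves rows in a shadow
# table, and a view re-unifies live and deleted rows for the ETL.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Staging table kept in sync with the operational database by replication.
cur.execute("CREATE TABLE orders_stg (id INTEGER PRIMARY KEY, amount REAL)")

# Shadow table that preserves deleted rows, flagged as deleted.
cur.execute(
    "CREATE TABLE orders_deleted "
    "(id INTEGER PRIMARY KEY, amount REAL, deleted INTEGER DEFAULT 1)"
)

# Delete trigger: copy the row into the shadow table before it disappears.
cur.execute("""
    CREATE TRIGGER orders_on_delete
    BEFORE DELETE ON orders_stg
    BEGIN
        INSERT INTO orders_deleted (id, amount, deleted)
        VALUES (OLD.id, OLD.amount, 1);
    END
""")

# The view the ETL reads: live rows plus the preserved deleted rows.
cur.execute("""
    CREATE VIEW orders_all AS
    SELECT id, amount, 0 AS deleted FROM orders_stg
    UNION ALL
    SELECT id, amount, deleted FROM orders_deleted
""")

cur.execute("INSERT INTO orders_stg VALUES (1, 10.0), (2, 20.0)")
cur.execute("DELETE FROM orders_stg WHERE id = 1")  # replicated delete arrives

rows = cur.execute("SELECT id, deleted FROM orders_all ORDER BY id").fetchall()
print(rows)  # → [(1, 1), (2, 0)]  (deleted row still visible to ETL, flagged)
```

The key point the design relies on is that the trigger fires in the staging database, not in the operational one, so the application's delete transaction pays no extra cost.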

So from the application's standpoint the records are deleted, and the transaction is not lengthened because there is no trigger on the table the original delete was issued against.

The thinking is that the operational database will only store 30 days of records and so will perform much faster (no specific performance targets have been given).
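The 30-day retention itself implies a recurring purge job on the operational side. A minimal sketch of such a purge, again using SQLite as a stand-in with made-up names:

```python
# Hedged sketch of a 30-day retention purge on the operational database.
# A real job would batch the deletes and run off-peak.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT)")

now = datetime(2014, 3, 31)
cur.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, (now - timedelta(days=45)).isoformat()),  # older than 30 days
     (2, (now - timedelta(days=5)).isoformat())],  # within the window
)

# Purge everything older than the 30-day cutoff.
cutoff = (now - timedelta(days=30)).isoformat()
cur.execute("DELETE FROM orders WHERE created_at < ?", (cutoff,))

remaining = [r[0] for r in cur.execute("SELECT id FROM orders").fetchall()]
print(remaining)  # → [2]  (only the recent row survives)
```

Note that under the proposed design this purge would itself replicate to staging as ordinary deletes, which the step-3 trigger would then absorb into the "deleted records" table, so the purge cannot be distinguished from an application delete downstream.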

I think this type of design is more prevalent in the warehousing world, but I have not encountered it in the OLTP universe. Thoughts, comments, suggestions?

thanks,
tom
Jack Parker
2014-03-31 14:56:43 UTC
Delete on an OLTP system? Depending on the volume, you might want to use a fragmentation strategy to drop old data instead.
I am a little surprised that a delete is to be carried forward to a data warehouse; typically a data warehouse does not remove data.

j.
_______________________________________________
Informix-list mailing list
http://www.iiug.org/mailman/listinfo/informix-list