Friday, March 30, 2012

Partial database copy - kind of replication

Hi,
I need to update a SQL Server 2000 database at a remote location twice a day, over a 128 kbit/s Internet line (and even that is not always working). The source database is also on SQL Server 2000. Only some tables in the database matter, but I cannot make any structural changes (such as adding columns) to the tables. The full database is fairly big (> 500 MB), so snapshot replication is not an option.
My first idea was to create a second database holding a shadow table for every original table: for each record I would store the primary key of the original record plus a kind of "CRC" computed over the rest of the data. When it is time to send an update, I would compare the stored CRC with a freshly computed one and send only the records whose CRC does not match. (I hope that is understandable.)
My questions are:
1. Is there a simpler solution?
2. If not, which algorithm should I use for the "CRC" so that I am sure to detect any change to a record? The tables hold all kinds of fields, even binary ones.
Thanks:
Peter
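The shadow-table idea can be sketched in a language-neutral way. Here is a minimal Python sketch (the actual implementation would live in T-SQL on SQL Server 2000; the `(pk, *columns)` row shape and the `shadow` dictionary standing in for the shadow table are assumptions for illustration) that stores one hash per primary key and reports which rows are new or changed:

```python
import hashlib

def row_hash(columns):
    # Hash every non-key column; repr() plus a separator byte avoids
    # ambiguity between e.g. ("ab", "c") and ("a", "bc").
    h = hashlib.sha1()
    for value in columns:
        h.update(repr(value).encode("utf-8"))
        h.update(b"\x00")
    return h.hexdigest()

def changed_rows(shadow, rows):
    """shadow: dict {pk: stored hash} (the 'shadow table').
    rows: iterable of (pk, column1, column2, ...) tuples.
    Returns the primary keys whose data is new or changed, and
    refreshes the shadow copy for those keys."""
    changed = []
    for pk, *columns in rows:
        digest = row_hash(columns)
        if shadow.get(pk) != digest:
            changed.append(pk)
            shadow[pk] = digest
    return changed
```

With a cryptographic hash such as SHA-1 or MD5, the chance of a changed record keeping the same hash is negligible, unlike a 32-bit CRC, where collisions across millions of rows become plausible.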
Peter,
Did you look into merge replication? It seems to me that it might be a good
answer to your problem. You can choose which tables to replicate, and only
changes will be sent to the remote server. If needed, you can also apply
filters to your tables: you can filter rows as well as columns.
HTH, Igor
|||I would use transactional replication for this.
If you want to do some sort of CRC comparison, you should look at the CHECKSUM and BINARY_CHECKSUM functions.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
|||Unless the dataflow is in both directions and involves
autonomy or conflicts, I would recommend transactional
over merge, as it is significantly faster and less
resource-intensive.
If you want a non-replication solution, BINARY_CHECKSUM
can be used, although there is some discussion as to
whether it can always be relied upon 100%. Red Gate also
have a Data Compare tool that does this type of thing.
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
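Paul's caveat about relying on checksums 100% comes down to collisions: a weak checksum can produce the same value for two different rows, so a change is silently missed. A small illustrative sketch (plain Python; `naive_checksum` is a made-up weak checksum for demonstration, not SQL Server's actual BINARY_CHECKSUM algorithm) shows two different values colliding under a naive byte-sum while a cryptographic hash tells them apart:

```python
import hashlib

def naive_checksum(data: bytes) -> int:
    # Order-insensitive: just sums the byte values, far weaker than a real CRC.
    return sum(data) % 2**32

a = b"ab"   # bytes 97, 98
b = b"ba"   # same bytes, different order

# The naive checksum cannot see the difference...
assert naive_checksum(a) == naive_checksum(b)
# ...but a cryptographic hash catches it.
assert hashlib.sha1(a).digest() != hashlib.sha1(b).digest()
```

This is why a hash-based shadow-table scheme should prefer a cryptographic hash over a simple CRC-style checksum when missed changes are unacceptable.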
