Master-Slave
Master-slave is a database architecture divided into a master database and one or more slave databases. The slave databases serve as backups of the master. The master handles write operations, while read operations can be spread across multiple slave databases.
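The write-to-master, read-from-slaves split can be sketched in a few lines. This is a minimal illustration, not a real database driver: the "databases" are plain dicts, and the class and method names (`MasterSlaveRouter`, `write`, `read`) are hypothetical.

```python
import itertools

class MasterSlaveRouter:
    """Routes writes to the master and spreads reads across slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves
        self._rr = itertools.cycle(slaves)  # round-robin over read replicas

    def write(self, key, value):
        # All writes go to the master first...
        self.master[key] = value
        # ...and are then replicated to every slave (simplified as a copy).
        for slave in self.slaves:
            slave[key] = value

    def read(self, key):
        # Each read is served by the next slave in round-robin order.
        return next(self._rr).get(key)

master = {}
slaves = [{}, {}]
router = MasterSlaveRouter(master, slaves)
router.write("user:1", "alice")
print(router.read("user:1"))  # served by a slave, not the master
```

In practice the replication step is performed by the database engine itself; the router only decides which connection a query uses.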
Peer-to-Peer
Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates transactionally consistent changes in near real time.
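In a peer-to-peer setup, any node can accept a write and propagate it to every other node. The sketch below, assuming an idealized network with no conflicts or latency, shows that shape; the `Peer` class and its methods are illustrative names, not any product's API.

```python
class Peer:
    """A node that accepts writes locally and propagates them to its peers."""

    def __init__(self, name):
        self.name = name
        self.data = {}
        self.peers = []

    def connect(self, others):
        self.peers = [p for p in others if p is not self]

    def write(self, key, value):
        self.apply(key, value)
        for peer in self.peers:      # propagate the change to every other node
            peer.apply(key, value)

    def apply(self, key, value):
        self.data[key] = value

nodes = [Peer("a"), Peer("b"), Peer("c")]
for n in nodes:
    n.connect(nodes)
nodes[0].write("order:7", "shipped")  # a write on any node...
print(nodes[2].data["order:7"])       # ...becomes visible on every node
```

A real peer-to-peer system must also detect and resolve conflicting writes made on different nodes, which this sketch deliberately ignores.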
Synchronous
Synchronous replication ensures that when a disk IO operation is initiated by an application or the file system cache on the primary server, the system waits for IO acknowledgments from both the local disk and the secondary server before acknowledging the operation to the application or file system cache. This mechanism guarantees data consistency and is crucial for failing over transactional applications, since a transaction is only considered committed once both copies have acknowledged it.
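The defining rule is that the primary does not acknowledge a write until both the local disk and the secondary have acknowledged it. A toy sketch of that rule, with hypothetical class names and dicts standing in for disks:

```python
class Secondary:
    def __init__(self):
        self.disk = {}

    def replicate(self, key, value):
        self.disk[key] = value
        return True                       # acknowledgment back to the primary

class SynchronousPrimary:
    def __init__(self, secondary):
        self.disk = {}
        self.secondary = secondary

    def write(self, key, value):
        self.disk[key] = value            # local disk IO
        ack = self.secondary.replicate(key, value)  # wait for the remote IO
        if not ack:
            raise IOError("secondary did not acknowledge the write")
        return True                       # acknowledged only after BOTH IOs

sec = Secondary()
prim = SynchronousPrimary(sec)
prim.write("x", 1)   # returns only once both copies hold the value
```

The cost of this guarantee is latency: every write pays a round trip to the secondary before the application sees it complete.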
Asynchronous
Asynchronous replication involves placing input/output (IO) operations in a queue on the primary server. The primary server does not wait for acknowledgments from the secondary server before proceeding. Consequently, any data that has not been successfully copied to the secondary server before a failure of the primary server is lost.
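The queue-and-continue behavior, and the resulting data-loss window, can be sketched as follows; the names (`AsynchronousPrimary`, `flush`) are hypothetical, and a crash is simulated simply by never shipping the last queued write.

```python
from collections import deque

class AsynchronousPrimary:
    def __init__(self, secondary_disk):
        self.disk = {}
        self.secondary_disk = secondary_disk
        self.queue = deque()              # pending IOs not yet shipped

    def write(self, key, value):
        self.disk[key] = value
        self.queue.append((key, value))   # queued; no wait for the secondary
        return True                       # acknowledged immediately

    def flush(self):
        # Ship queued IOs to the secondary in order.
        while self.queue:
            key, value = self.queue.popleft()
            self.secondary_disk[key] = value

secondary_disk = {}
primary = AsynchronousPrimary(secondary_disk)
primary.write("k1", "v1")
primary.write("k2", "v2")
primary.flush()                  # these two changes reach the secondary
primary.write("k3", "v3")        # still queued when the primary fails...
# ...so "k3" never reaches the secondary and is lost on failover
```

The trade-off is the mirror image of synchronous replication: writes are fast because they never wait on the network, but the queue contents are lost if the primary dies before flushing.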
Data Consistency
Consistency refers to the correctness and availability of the most recently updated data at any given moment in the database. The guarantee of always reading consistent data is at the core of every relational database, and it also helps maintain data integrity and accuracy.
Replication Topologies
A replication topology is the route that replicated data (messages) takes from server to server over the network. In a common multi-tier topology, a source location pushes data to a target location, and that target in turn acts as a source, distributing the data out to multiple further targets. This pattern is typically used to replicate data into a data warehouse and then build individual data marts from it.
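The two-hop route described above (source to warehouse, warehouse fanned out to data marts) can be sketched with dicts standing in for the databases; `replicate` is an illustrative helper, not a real API.

```python
def replicate(source, targets):
    """Push a copy of the source's data to each target location."""
    for target in targets:
        target.update(source)

source = {"sale:1": 100, "sale:2": 250}   # operational database
warehouse = {}                            # target that also acts as a source
marts = [{}, {}, {}]                      # individual data marts

replicate(source, [warehouse])   # hop 1: source -> data warehouse
replicate(warehouse, marts)      # hop 2: warehouse -> each data mart
```

Here the warehouse is both a target (of the operational source) and a source (for the marts), which is exactly the dual role the topology describes.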
Log Shipping
Log shipping is a method used to maintain a standby copy of a database, typically for disaster recovery purposes. It involves automatically copying and restoring transaction logs from a primary database to one or more secondary databases, often located on different servers. By keeping the secondary databases in sync with the primary, log shipping provides a reliable failover solution in case of primary database failure.
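The copy-and-restore cycle can be sketched as an append-only transaction log on the primary and a job that periodically ships new records to the standby. This is a minimal model, with hypothetical names (`Database`, `ship_logs`) and log records reduced to key/value pairs.

```python
class Database:
    def __init__(self):
        self.data = {}
        self.log = []                     # transaction log of (key, value) records

    def commit(self, key, value):
        self.data[key] = value
        self.log.append((key, value))     # every committed change is logged

def ship_logs(primary, standby, shipped):
    """Copy log records past the last shipped position and replay them."""
    new_records = primary.log[shipped:]
    for key, value in new_records:
        standby.data[key] = value         # restore the record on the standby
    return shipped + len(new_records)     # new restore position

primary, standby = Database(), Database()
primary.commit("acct:1", 500)
pos = ship_logs(primary, standby, 0)      # first shipping cycle
primary.commit("acct:2", 900)
pos = ship_logs(primary, standby, pos)    # next cycle ships only the new record
# the standby now mirrors the primary and can take over on failure
```

Because shipping runs on a schedule, the standby lags the primary by up to one cycle; log shipping is therefore a form of asynchronous replication.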