Previous papers have postulated that traditional schemes for the management
of replicated data are doomed to failure in practice due to a quartic (or
worse) explosion in the probability of deadlocks. In this paper,
we present results of a simulation study for three recently introduced
protocols that guarantee global serializability and transaction atomicity
without resorting to the two-phase commit protocol. The protocols
analyzed in this paper include a global locking protocol (by Gray et al.,
SIGMOD '97), a ``pessimistic'' protocol based on a replication graph (by
Breitbart and Korth, PODS '97), and an ``optimistic'' protocol based on a
replication graph. The results of the study show a wide range of practical
applicability for the lazy replica-update approach employed in these protocols.
We show that under reasonable contention conditions and a sufficiently high
transaction rate, both replication-graph-based protocols outperform the
global locking protocol. The performance distinctions among the protocols
are significant. For example, at an offered load under which 70% to 80% of
transactions were aborted by the global locking protocol, only 10% of
transactions were aborted under the protocols based on the
replication graph. The results of the study suggest that protocols
based on a replication graph offer practical techniques for replica management.
However, they also show that performance deteriorates rapidly and dramatically
once transaction throughput reaches a saturation point.