Scalability

Ripple is designed to be very efficient and to scale to high transaction volumes. However, Ripple runs in the real world, and real-world systems have limits. The systems that process Ripple transactions, the validators, have finite resources: communications bandwidth, CPU processing power, memory, storage capacity, storage transactions per second, and so on. Depending on these limits, a validator may or may not be able to participate in the validation process.

Validators that cannot keep up with the network's transaction volume bow out of the consensus process. When they do, the validation network shrinks and the Ripple network as a whole becomes slightly less secure. With fewer validators, the chance that a large share of them is under the control of the same malicious entity goes up. If the network shrank too far, it could become insecure, and if the load were too high even for the last remaining node, there would be no network at all.

As a safety measure, Ripple servers refuse to process transactions or report results to clients if there are insufficient validations. While refusing service protects people from relying on incorrect information, what we really want is for the network to keep operating reliably, not merely to know that it is unreliable.
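
This check can be thought of as a simple quorum rule over a server's trusted validator list. The sketch below is a minimal illustration assuming a hypothetical trusted list and an 80% quorum fraction; it is not the actual rippled logic.

 # Minimal sketch of the "insufficient validations" safety check.
 # The validator list and quorum fraction are assumptions for this
 # illustration, not values taken from rippled.
 TRUSTED_VALIDATORS = {"v1", "v2", "v3", "v4", "v5"}
 QUORUM_FRACTION = 0.8

 def ledger_is_validated(validations):
     """Return True if enough trusted validators signed this ledger."""
     trusted_seen = {v for v in validations if v in TRUSTED_VALIDATORS}
     return len(trusted_seen) >= QUORUM_FRACTION * len(TRUSTED_VALIDATORS)

 def report_result(validations, result):
     # Refuse to report anything the network has not sufficiently validated.
     if not ledger_is_validated(validations):
         raise RuntimeError("insufficient validations; refusing to report")
     return result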

To do this, the Ripple network actively protects itself against the set of validators shrinking too far. Validators increase the transaction fee they require when they are under load. The temporarily increased fee ensures that only the most valuable transactions are processed; less valuable transactions can wait until the load drops before they are processed.
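
As an illustration, the required fee can be modeled as a base fee scaled by the server's current load. The scaling rule below is an assumption made for this sketch, not the exact formula Ripple servers use.

 # Illustrative load-based fee escalation. BASE_FEE_DROPS and the
 # scaling rule are assumptions for this sketch.
 BASE_FEE_DROPS = 10

 def required_fee(load_factor):
     """load_factor == 1.0 means an unloaded server."""
     return int(BASE_FEE_DROPS * max(1.0, load_factor))

 def accept_transaction(tx_fee, load_factor):
     # A transaction is only processed if it pays at least the
     # currently required fee.
     return tx_fee >= required_fee(load_factor)

For example, a server at four times its normal load would require a 40-drop fee under this rule, so a transaction offering only the 10-drop base fee would wait until the load subsides.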

Transaction submitters indicate the relative value of their transactions by increasing the fee they pay. Validators under load include the fee they currently require in their validations, which lets submitters estimate what fee is required for a transaction to succeed. This strategy reduces the overall load and lets validators that would otherwise bow out keep processing, which in turn ensures that the network does not shrink to a size too small to provide security.
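
A submitter can combine the fee levels advertised in recent validations into an estimate, for example by taking a high percentile so that most validators would accept the transaction. The helper below is hypothetical and only illustrates that idea.

 # Hypothetical fee estimator for a transaction submitter.
 def estimate_fee(advertised_fees, percentile=0.75):
     """advertised_fees: fee levels seen in recent validations."""
     fees = sorted(advertised_fees)
     index = min(len(fees) - 1, int(percentile * len(fees)))
     return fees[index]

 # Example: validators currently advertise 10, 10, 12, 50 and 80 drops.
 print(estimate_fee([10, 10, 12, 50, 80]))  # -> 50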

Validators that bow out do not have to drop off completely. They can observe that other validating nodes on their UNL are still validating and publish a "partial validation" indicating that they observed the network validation process but could not participate. These partial validations let other validators distinguish between validators bowing out due to load and a network split. In the case of a network split, anyone in the minority must suspend using the network until connectivity is restored.
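
The distinction can be pictured as a classification over which UNL members sent full validations, partial validations, or nothing at all. The sketch below uses illustrative thresholds and names; it is not the actual rippled logic.

 # Sketch of distinguishing load shedding from a network split.
 def classify(unl, full, partial):
     """unl: trusted validators; full/partial: validators that sent
     full or partial validations for the latest ledger."""
     responding = full | partial
     if len(full) >= 0.8 * len(unl):
         return "healthy"
     if len(responding) >= 0.8 * len(unl):
         # Most validators are reachable but some only sent partial
         # validations: they are shedding load, not partitioned away.
         return "degraded-by-load"
     # Too many validators are silent: this node may be on the minority
     # side of a split and should stop relying on the network.
     return "possible-network-split"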

Ripple transactions are designed to be relatively small, and it is easy for many validators to have excellent Internet bandwidth, so the mechanisms described above should allow a large number of transactions to be communicated before the network has to raise fees. Overall Internet bandwidth is also increasing rapidly.

For many transactions, the most expensive operation is validating the signature on the transaction. Ripple servers under common administration can form "Ripple clusters" that distribute the work of verifying signatures. If an organization using Ripple decides to run, say, one validator, two client servers for its own use, and one public client server, it can put them in a cluster so that a transaction or validation does not have to be signature-checked by all four servers.
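
The saving comes from a server trusting the signature checks already performed by its cluster peers. The sketch below, with made-up peer names, shows the idea; it is not the actual clustering protocol.

 # Sketch of cluster behaviour: skip re-checking signatures on
 # transactions relayed by servers in the same (trusted) cluster.
 CLUSTER_PEERS = {"server-a", "server-b", "server-c"}  # illustrative names

 def verify_signature(tx):
     # Stand-in for the expensive public-key signature check.
     return True

 def signature_ok(tx, relayed_by, peer_checked):
     if relayed_by in CLUSTER_PEERS and peer_checked:
         return True                  # trust the cluster peer's check
     return verify_signature(tx)      # otherwise do the expensive check here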

That leaves the process of actually applying the transactions -- determining what ledger changes each transaction makes -- as the major potential bottleneck. Transaction processing can be done in parallel on multiple cores and multiple machines, and since CPUs are cheap, this is not a large problem. The transaction engine applies transactions very efficiently, and the ledger is designed to stay small due to the reserve system. If the ledger were growing too large, the reserve could be increased to encourage a smaller ledger.
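
One way to picture the parallelism: transactions that touch disjoint sets of accounts can be applied by separate workers and their effects merged afterwards. The grouping and merge rules below are assumptions made for this sketch, and Python threads merely stand in for separate cores or machines.

 # Hedged sketch of parallel transaction application.
 from concurrent.futures import ThreadPoolExecutor

 def apply_group(group):
     """Apply one group of transactions (each a dict of account -> amount)
     and return the net balance deltas for that group."""
     deltas = {}
     for tx in group:
         for account, amount in tx.items():
             deltas[account] = deltas.get(account, 0) + amount
     return deltas

 def apply_all(ledger, groups):
     # groups: lists of transactions with non-overlapping accounts,
     # so each group can safely be processed independently.
     with ThreadPoolExecutor() as pool:
         for deltas in pool.map(apply_group, groups):
             for account, amount in deltas.items():
                 ledger[account] = ledger.get(account, 0) + amount
     return ledger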

In addition, portions of the ledger that are not changed do not require any processing, so a large ledger does not automatically mean that transactions take more effort to process.
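
This property follows naturally if the ledger is stored as a tree whose versions share unchanged subtrees by reference, so only the path actually touched by a transaction is copied. The node layout below is illustrative only, not Ripple's actual ledger format.

 # Copy-on-write tree sketch: updating one key copies only the nodes
 # on the path to that key; every untouched subtree is shared as-is.
 def update(node, key, value, depth=0):
     if node is None:
         node = {"children": {}, "leaf": None}
     new_node = dict(node, children=dict(node["children"]))
     if depth == len(key):
         new_node["leaf"] = value
     else:
         branch = key[depth]
         new_node["children"][branch] = update(
             node["children"].get(branch), key, value, depth + 1)
     return new_node  # siblings of the modified path are reused, not copied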

Commodity hardware with many gigabytes of RAM is readily available. Together, these features allow the ledger to be kept in RAM. Flash capacities are rising as well, and keeping the ledger on an SSD would be significantly faster than using a conventional hard drive.

Additionally, the transaction engine is specifically designed to run very efficiently on a single processor. In the future, custom ASICs could handle even more transactions per second. By the time there are that many transactions to handle, it will be very affordable for these ASICs to be produced and distributed for free by parties interested in keeping the network working. If needed, Ripple Labs could likely raise the necessary funds even today.
