Virtual Block Store
While many block virtualization technologies in use today deliver thin provisioning, few provide the fine-grained (4 KB) virtualization needed to deliver data deduplication efficiency, performance, and scalability in an affordable footprint. In most of today’s products, deduplication was an afterthought rather than part of the core design. As a result, many solutions require large block sizes or high memory footprints, and when combined with deduplication these compromises hurt performance, scalability, or data reduction rates.
Permabit’s Virtual Block Store has been designed from the ground up with not just thin provisioning, but also deduplication and compression in mind. The Block Store technology is utilized by both Permabit’s Albireo VDO and Albireo SANblox™ products to address the diverse requirements of Storage OEMs, ODMs, Cloud Service Providers, and Software-Defined Storage vendors.
When compared to other solutions in use today, the Virtual Block Store offers distinct advantages in performance, scalability, and resource efficiency, as described below.
How It Works
The Virtual Block Store leverages four key innovations to offer thin provisioning, data deduplication and compression with high performance, massive scalability and extreme resource efficiency:
Lock-free concurrency – The Virtual Block Store uses a lock-free design, which allows Permabit to deliver the highest possible performance when many threads of execution are active, maximizing CPU utilization and parallelism of operations. The underlying mechanism allows Permabit to fine-tune performance to the capabilities of any underlying hardware platform.
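The flavor of lock-free design described above can be illustrated with a compare-and-swap (CAS) retry loop in C11 atomics. This is a minimal sketch, not Permabit's actual code; the name `vbs_alloc_block` and the idea of an allocation cursor are illustrative assumptions.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical allocation cursor: the next unclaimed block number.
 * A static _Atomic variable is zero-initialized. */
static _Atomic uint64_t next_free_block;

/* Claim the next block number without taking a lock. Many threads can
 * call this concurrently; each receives a distinct block number. */
static uint64_t vbs_alloc_block(void)
{
    uint64_t cur = atomic_load(&next_free_block);
    /* CAS loop: if another thread won the race, cur is reloaded with the
     * new value and we simply retry -- no thread ever blocks. */
    while (!atomic_compare_exchange_weak(&next_free_block, &cur, cur + 1))
        ;
    return cur;
}
```

Because no thread ever waits on a mutex, threads that lose a CAS race retry immediately, which is what keeps CPU utilization high under heavy parallelism.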
Portability and isolation – The technology is designed for both kernel-space and user-space deployments, making it applicable to the broadest possible range of applications. This has the added benefit of making the code easier to test: thousands of unit tests can be run to ensure correctness of the product.
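One common way to achieve this kind of dual kernel/user-space portability is to route environment-specific services through a thin compile-time shim, so the core logic never calls the environment directly. The sketch below assumes that approach; the `vbs_alloc`/`vbs_free` names are illustrative, not Permabit's API.

```c
#include <stddef.h>

/* Portability shim: the same core code compiles in either environment.
 * In a kernel build, memory comes from the slab allocator; in a user-space
 * build (including unit tests), it comes from malloc. */
#ifdef __KERNEL__
#include <linux/slab.h>
static inline void *vbs_alloc(size_t n) { return kmalloc(n, GFP_KERNEL); }
static inline void  vbs_free(void *p)   { kfree(p); }
#else
#include <stdlib.h>
static inline void *vbs_alloc(size_t n) { return malloc(n); }
static inline void  vbs_free(void *p)   { free(p); }
#endif
```

Keeping the environment-specific surface this small is what makes exhaustive user-space unit testing of the core logic practical.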
Write amortization – To maximize performance, the block store amortizes processing across many requests, reducing I/O overhead. As part of this design, reference counts are managed explicitly, eliminating the need for garbage collection.
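The amortization idea can be sketched as batching many reference-count changes in memory and applying them in one pass, so N requests cost one write instead of N. The slab structure and function names below are illustrative assumptions, not the product's on-disk format.

```c
#include <stddef.h>

#define SLAB_BLOCKS 4096  /* illustrative slab size */

typedef struct {
    int delta[SLAB_BLOCKS];  /* pending refcount changes, by block index */
    int refs[SLAB_BLOCKS];   /* authoritative counts (the "on-disk" copy) */
    int dirty;               /* number of pending updates */
} refcount_slab;

/* Record a refcount change: a cheap in-memory update, no I/O yet. */
static void slab_adjust(refcount_slab *s, size_t block, int by)
{
    s->delta[block] += by;
    s->dirty++;
}

/* Apply every pending delta in one pass; a real store would issue a
 * single write of the slab here. Because counts are explicit, a block
 * whose count reaches zero is immediately reusable -- no garbage
 * collection pass is ever needed. */
static void slab_flush(refcount_slab *s)
{
    for (size_t i = 0; i < SLAB_BLOCKS; i++) {
        s->refs[i] += s->delta[i];
        s->delta[i] = 0;
    }
    s->dirty = 0;
}
```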
Shortcut processing – To eliminate performance bottlenecks, unneeded operations in the data path are skipped whenever possible.
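As one concrete example of a shortcut, a write path can test whether an incoming 4 KB block is all zeroes and, if so, skip the rest of the pipeline (dedup lookup, compression, physical write) and record only a mapping to a shared zero block. This sketch assumes that technique; it is illustrative, not a description of Permabit's exact data path.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum { VBS_BLOCK_SIZE = 4096 };  /* the 4 KB granularity described above */

/* Return true if the block contains only zero bytes. A caller in the
 * write path would use this to bypass every downstream stage for
 * zero-filled data, which is common in thinly provisioned volumes. */
static bool is_zero_block(const uint8_t *data)
{
    for (size_t i = 0; i < VBS_BLOCK_SIZE; i++)
        if (data[i] != 0)
            return false;
    return true;
}
```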