At the fifth Storage Field Day, held on April 23, 2014, PernixData gave the world a sneak preview of the next version of their flagship product, FVP.
Today FVP already gives us pooled flash resources with write-through and write-back support. Because of the write-back support, PernixData made sure the FVP platform is very resilient and reliable (after all, you don't want to lose data). The fact that 75% of accelerated production machines run in write-back mode proves that FVP can be trusted with your data.
Yesterday's announcement takes things a bit further. Here are the new features:
– Support of all storage protocols (NFS, iSCSI, FC, Local disks)
– Compression of replication network traffic
– Replication Groups
– Write-through and write-back to RAM
The support of all storage protocols is very nice. It's now possible for anyone to use this product, so everybody can enjoy accelerated virtual machines. And believe me, once you have experienced it, you don't want to go back. The neat thing is that FVP supports all protocols in a transparent way. There is no virtual appliance involved, and you don't have to reconfigure your NFS storage or networking. Just install FVP, create a flash cluster and add the datastore to it. The datastore will "look and feel" the same but will be magically faster.
Now, let’s jump to the RAM caching right away. I’ll get back to the other two new features in a minute. FVP offers read and write caching to local server RAM. I don’t think I have to explain what this will do to the storage speed your VMs experience. So let’s dig into some details right away.
First of all, you can use any amount of RAM on any server. There is no minimum, and you don't even have to use the same amount on each server. It can also be expanded on the fly. The maximum amount of RAM that can be used as storage cache is 1TB per server.
Now I can hear you thinking: "Sounds cool, it's probably insanely fast, but what happens when a host fails? Or a whole datacenter?". A valid concern. After all, RAM is volatile and flash is not, so flash seems the obvious choice for a storage cache if you don't want to lose any data. But remember, FVP has built-in protection against host failures. You can configure synchronous replicas in a flash cluster, which means data is always available on another host if one host fails. And that means it doesn't matter whether the data on the original host was persisted to disk or was in volatile RAM: as long as it's still available somewhere, you're good.
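To make the synchronous-replica idea concrete, here is a minimal sketch of write-back caching with a replica peer. All class and method names are invented for illustration; this is not the PernixData API, just the concept: the VM's write is acknowledged only after every replica host also holds a copy in its cache.

```python
# Hypothetical sketch of write-back caching with synchronous replicas.
# Names are invented for illustration; FVP's internals are not public.

class CacheHost:
    """A host contributing RAM (or flash) to the acceleration cluster."""
    def __init__(self, name):
        self.name = name
        self.cache = {}          # block address -> data, held in RAM

    def store(self, block, data):
        self.cache[block] = data


class WriteBackCache:
    """Acknowledges a write only after every replica host has a copy."""
    def __init__(self, local, replicas):
        self.local = local        # the host running the VM
        self.replicas = replicas  # peers holding synchronous replicas

    def write(self, block, data):
        self.local.store(block, data)
        for peer in self.replicas:   # synchronous: block until each copy lands
            peer.store(block, data)
        return "ack"                 # only now does the VM see the write done

    def survivors(self, failed_host):
        """After a host failure, the data still exists on the other hosts."""
        return [h for h in [self.local] + self.replicas if h is not failed_host]


host_a, host_b = CacheHost("esx-a"), CacheHost("esx-b")
cache = WriteBackCache(host_a, [host_b])
cache.write(0x10, b"payload")
assert host_b.cache[0x10] == b"payload"  # replica holds the data too
```

The point of the sketch: whether the local copy lives in volatile RAM or on flash is irrelevant to durability, because the acknowledgement implies a second host already has the block.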
When you start using RAM as a write cache and replicate it to another RAM caching host, network latency becomes a major factor. To optimize the traffic between replication partners, PernixData introduces replication traffic compression. This even makes 1Gbit networks usable when you are doing flash caching, although I guess that with RAM caching you still want 10Gbit. That makes me wonder: wouldn't RDMA over Infiniband be the lowest-latency solution for that? @PernixData: any plans to implement this?
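The idea behind compressing replication traffic is simple: cache blocks are often compressible, so fewer bytes need to cross the replication network. A rough illustration using Python's `zlib` as a stand-in (FVP's actual compression algorithm is not public):

```python
import zlib

# Illustration only: compress a replication payload before it goes on the
# wire, decompress on the receiving host. zlib is a stand-in algorithm;
# a low compression level keeps CPU-added latency small.

def send_replica(payload: bytes) -> bytes:
    """Compress a replication payload before putting it on the wire."""
    return zlib.compress(payload, level=1)

def receive_replica(wire_bytes: bytes) -> bytes:
    """Restore the original payload on the replica host."""
    return zlib.decompress(wire_bytes)

block = b"A" * 4096                 # a highly compressible 4 KiB cache block
wire = send_replica(block)
assert receive_replica(wire) == block
print(len(block), "->", len(wire))  # far fewer bytes cross the network
```

For compressible data the wire size drops dramatically, which is exactly what makes slower links viable for replication; the trade-off is CPU time spent compressing, which matters more as the cache medium gets faster.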
But what happens in case of a power failure in your datacenter? Then all hosts go down and all data in RAM is lost, right? Wrong! This is where the "replication groups" feature comes into play. You can now group your servers, and FVP will write a replica to each group. So if you have a stretched cluster, you create a group for each datacenter. If one datacenter fails, your data is still available in the other one, your machines will fail over and everybody will stay happy. And accelerated. In RAM. Which means FAST. VERY FAST.