Over the last few years I’ve been experimenting with GlusterFS and its capabilities as a distributed object store; a lot has changed in the software, especially since Red Hat acquired it. I have been using it and found it useful for many projects but not for others: what I love is the community-oriented approach, with a very responsive team and support for every kind of user (from the two-node web server to a RAID10 Infiniband cluster for high-end storage).
My personal story with Gluster starts with porting an on-premise architecture to the cloud: moving an existing application to the cloud, instead of redesigning it from scratch, involves a lot of engineering to adapt the current system to a scalable infrastructure. Gluster comes in handy when talking about scaling: the latest milestone has a very simple and efficient way of reconfiguring the underlying hardware, and adding or removing nodes in the storage pool is as simple as issuing a couple of commands from any of the peers in the cluster.
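As a sketch of what that reconfiguration looks like, growing or shrinking the pool boils down to commands like these, run from any existing peer (the hostname server3 and the volume name myvol are hypothetical):

```shell
# Add a new node to the trusted storage pool (hostname is hypothetical).
gluster peer probe server3

# Attach a brick hosted on the new node to an existing volume ("myvol" is assumed).
gluster volume add-brick myvol server3:/data/brick1

# Redistribute existing data across the enlarged volume.
gluster volume rebalance myvol start

# Shrinking works the same way in reverse: drain the brick, then detach the peer.
gluster volume remove-brick myvol server3:/data/brick1 start
gluster peer detach server3
```

These commands need a live trusted pool to run against, so treat the snippet as a shape of the workflow rather than a copy-paste recipe.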
If you’re unfamiliar with Gluster concepts (storage pool, peers, etc…) I suggest you RTFM on Gluster’s website; in this post I will detail a few points you won’t find in the documentation and that you should definitely know before starting to evaluate Gluster adoption.
If you think the data redundancy mechanisms built into Gluster (replication and geo-replication) are substitutes for backups, you’re doing it wrong: Gluster has no way of recovering data that existed only on failed drives or in unavailable portions of the pool. There is no SPOF-free setup that will spare you regular backups, unless you can tolerate data loss.
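To be clear about what replication does cover, here is a sketch of setting it up (volume and host names are hypothetical; the geo-replication syntax is from the 3.5+ CLI). A two-way replica keeps every file on both bricks, and geo-replication asynchronously mirrors the volume to a remote slave; both faithfully propagate deletions and corruption, which is exactly why neither replaces a backup:

```shell
# Create a volume that keeps two copies of every file (names are hypothetical).
gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
gluster volume start myvol

# Asynchronously mirror the volume to a remote site (3.5+ CLI syntax).
gluster volume geo-replication myvol backuphost::backupvol create push-pem
gluster volume geo-replication myvol backuphost::backupvol start

# Note: an accidental "rm -rf" on the mounted volume is replicated and
# geo-replicated just as faithfully as any other change.
```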
Gluster has been developed to “take common hardware and turn it into scalable high performance storage solution”. Gluster is great when availability and durability are your performance indicators, because it was designed for horizontal scalability; scaling vertically in system resources will not have the desired outcome. There are physical thresholds in a node’s configuration that make huge hardware resources useless (e.g., the limit on the number of CPU threads available to the translator).
If you are uncertain about Gluster’s capabilities, try it out yourself by installing the software on at least two virtual machines and testing whether your application works well with the native FUSE module. As a storage layer for I/O-intensive applications, Gluster is useful when the average file size is bigger than the minimum size of the read cache (4MB). The Gluster community is currently discussing how (relatively) small files should be handled in the next major release of milestone 3 (release 3.7), scheduled for the end of April 2015, but for now, if your application scenario writes lots of small files frequently, Gluster may not be the right choice.
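To get a quick feel for whether your data set clears that 4MB threshold, a rough sketch like the following works on any directory tree (here a temporary directory with two sample files stands in for your application’s data, and the FUSE mount command in the comment uses hypothetical names):

```shell
# A native FUSE mount on a client would look like (hypothetical host/volume):
#   mount -t glusterfs server1:/myvol /mnt/gluster

# Stand-in data set: one small file and one large file.
tmpdir=$(mktemp -d)
head -c 1048576 /dev/zero > "$tmpdir/small.bin"   # 1 MiB
head -c 8388608 /dev/zero > "$tmpdir/large.bin"   # 8 MiB

# Average file size in bytes across the tree (GNU find).
avg=$(find "$tmpdir" -type f -printf '%s\n' \
      | awk '{ sum += $1; n++ } END { print int(sum / n) }')
echo "average file size: $avg bytes"

# The 4MB read-cache threshold mentioned above.
threshold=$((4 * 1024 * 1024))
if [ "$avg" -ge "$threshold" ]; then
    echo "above threshold"
else
    echo "below threshold"
fi
rm -rf "$tmpdir"
```

Point the same one-liner at your real data directory to see which side of the threshold your workload sits on.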
If you find Gluster is not suitable for your application, consider analyzing a different solution like DRBD: it may not be as cutting edge as Gluster or Ceph, but it may be the right tool for the job.