Posted to dev@phoenix.apache.org by Aleksandr Zuravliov <al...@visual-meta.com> on 2015/07/28 10:48:04 UTC

Phoenix Scalability

Hi all!

We are looking for a solution to replace MySQL as a single point of failure
in our DB layer. Being a non-redundant component, it causes downtime for us
in case of hardware failure and during upgrades.

We are an e-commerce company maintaining around 100 million items in more
than 10 countries, which amounts to roughly 300 GB of metadata per country.
To cope with these volumes we use custom sharding on top of MySQL, so we
need a replacement that provides high write and read performance. MySQL
Cluster does not suit us because it transfers high volumes of data between
nodes. Our infrastructure consists of around 100 nodes; we use Hadoop, and
non-relational data is stored in HBase.
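
To make the workload concrete, the sketch below is roughly the kind of
table we would expect to move to Phoenix. The table and column names are
purely illustrative, and, as far as I understand, SALT_BUCKETS is the
Phoenix table option that would take over the role of our custom sharding
(the bucket count is just a placeholder, not a recommendation):

    CREATE TABLE IF NOT EXISTS item_metadata (
        country_code  CHAR(2)  NOT NULL,
        item_id       BIGINT   NOT NULL,
        title         VARCHAR,
        price         DECIMAL(12, 2)
        CONSTRAINT pk PRIMARY KEY (country_code, item_id)
    ) SALT_BUCKETS = 32;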

   - Is anybody here using Phoenix at a similar scale? What issues do you
   face during regular maintenance and during peak load periods?
   - What scaling impediments did you have to overcome, or do you scale
   simply by adding nodes?
   - Do you have downtimes? What are the causes?
   - What about failover? Do you experience data loss?
   - Do maintenance operations, such as backups at runtime, affect
   performance?
   - How is upgrading the system organized? Is it seamless for the cluster?
   - Are you able to reclaim freed space (i.e. compact the files used by
   the DB) without shutting down a node?
   - How is performance during concurrent write/read operations (see the
   sketch after this list)? Any locking issues?
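
For context on the last point, the kind of concurrent traffic we have in
mind is plain upserts plus point and range reads against a table like the
one sketched above (again, the statements are only illustrative):

    UPSERT INTO item_metadata (country_code, item_id, title, price)
        VALUES ('DE', 123456789, 'Example item', 19.99);

    SELECT item_id, title, price
    FROM item_metadata
    WHERE country_code = 'DE' AND item_id = 123456789;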


We pretty much need to get a feeling for how painful it is to handle such
volumes on Phoenix.

Thank you!

All the best,
Aleksandr