Posted to hdfs-dev@hadoop.apache.org by "Tsz Wo Nicholas Sze (JIRA)" <ji...@apache.org> on 2014/06/21 00:40:28 UTC

[jira] [Created] (HDFS-6584) Support archival storage

Tsz Wo Nicholas Sze created HDFS-6584:
-----------------------------------------

             Summary: Support archival storage
                 Key: HDFS-6584
                 URL: https://issues.apache.org/jira/browse/HDFS-6584
             Project: Hadoop HDFS
          Issue Type: New Feature
          Components: datanode, namenode
            Reporter: Tsz Wo Nicholas Sze
            Assignee: Tsz Wo Nicholas Sze


In most Hadoop clusters, as more and more data is stored for longer periods, the demand for storage is outstripping the demand for compute. Hadoop needs a cost-effective and easy-to-manage solution to meet this storage demand. The current options are:
- Delete old, unused data. This comes at the operational cost of identifying unnecessary data and deleting it manually.
- Add more nodes to the cluster. Along with storage capacity, this adds unnecessary compute capacity to the cluster.

Hadoop needs a solution that decouples growing storage capacity from compute capacity. Nodes with denser, less expensive storage and low compute power are becoming available and can be used as cold storage in clusters. Based on policy, data can be moved from hot storage to cold storage. Adding more nodes to the cold storage tier then grows storage independently of the compute capacity in the cluster. A rough sketch of the policy idea follows.
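
To make the policy idea concrete, here is a minimal, hypothetical Java sketch of how a block storage policy could map each replica of a file to a storage tier (hot data on regular disks, cold data on archival nodes). The class and enum names below are illustrative assumptions for this proposal, not an existing HDFS interface.

import java.util.Arrays;
import java.util.List;

enum StorageType { DISK, ARCHIVE }  // assumed tier names for illustration

class BlockStoragePolicy {
    final String name;
    final List<StorageType> placement;  // preferred tier for each replica

    BlockStoragePolicy(String name, StorageType... placement) {
        this.name = name;
        this.placement = Arrays.asList(placement);
    }

    // Choose the tier for the i-th replica; fall back to the last listed
    // tier if the replication factor exceeds the placement list.
    StorageType chooseStorage(int replicaIndex) {
        int i = Math.min(replicaIndex, placement.size() - 1);
        return placement.get(i);
    }
}

public class StoragePolicyDemo {
    public static void main(String[] args) {
        // HOT: all replicas on regular disks (compute nodes).
        BlockStoragePolicy hot = new BlockStoragePolicy("HOT",
                StorageType.DISK, StorageType.DISK, StorageType.DISK);
        // COLD: all replicas on dense, low-compute archival nodes.
        BlockStoragePolicy cold = new BlockStoragePolicy("COLD",
                StorageType.ARCHIVE, StorageType.ARCHIVE, StorageType.ARCHIVE);

        for (int r = 0; r < 3; r++) {
            System.out.printf("replica %d: HOT -> %s, COLD -> %s%n",
                    r, hot.chooseStorage(r), cold.chooseStorage(r));
        }
    }
}

Under this kind of scheme, changing a file's policy from HOT to COLD would direct a background mover to migrate its replicas onto the archival tier, which is how growing cold data could be absorbed by storage-only nodes.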



--
This message was sent by Atlassian JIRA
(v6.2#6252)