Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2013/07/04 18:57:42 UTC

[Hadoop Wiki] Update of "Hbase/PoweredBy" by ermanpattuk

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Hbase/PoweredBy" page has been changed by ermanpattuk:
https://wiki.apache.org/hadoop/Hbase/PoweredBy?action=diff&rev1=82&rev2=83

Comment:
Added BigSecret to the poweredBy list

  [[http://www.adobe.com|Adobe]] - We currently have about 30 nodes running HDFS, Hadoop and HBase in clusters ranging from 5 to 14 nodes, in both production and development. We plan a deployment on an 80-node cluster. We are using HBase in several areas, from social services to structured data and processing for internal use. We constantly write data to HBase and run MapReduce jobs to process it, then store the results back to HBase or in external systems. Our production cluster has been running since Oct 2008.
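  The read/process/write-back loop described above is the standard HBase TableMapper pattern. A minimal sketch, assuming a hypothetical "events" table with a "d" column family (table and column names are illustrative, not anything Adobe has published), might look like this:
  {{{
// Minimal sketch of the HBase read -> MapReduce -> write-back pattern.
// Table and column names ("events", "d:raw", "d:processed") are hypothetical.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class ProcessAndWriteBack {

  // Reads each row, derives a "processed" value, and emits a Put for the same row.
  static class ProcessMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      byte[] raw = value.getValue(Bytes.toBytes("d"), Bytes.toBytes("raw"));
      if (raw == null) {
        return;
      }
      Put put = new Put(row.get());
      // Placeholder "processing": upper-case the stored string.
      put.add(Bytes.toBytes("d"), Bytes.toBytes("processed"),
          Bytes.toBytes(Bytes.toString(raw).toUpperCase()));
      context.write(row, put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "process-and-write-back");
    job.setJarByClass(ProcessAndWriteBack.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // larger scanner caching for batch scans
    scan.setCacheBlocks(false);  // don't pollute the block cache from a batch job

    // Map over the "events" table and write Puts back to the same table.
    TableMapReduceUtil.initTableMapperJob("events", scan, ProcessMapper.class,
        ImmutableBytesWritable.class, Put.class, job);
    TableMapReduceUtil.initTableReducerJob("events", null, job);
    job.setNumReduceTasks(0);  // map-only; Puts go straight to the table

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
  }}}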
  
  [[http://www.benipaltechnologies.com|Benipal Technologies]] - We have a 35-node cluster used for HBase and MapReduce, with Lucene / Solr and Katta integration to create and fine-tune our search databases. Currently, our HBase installation has over 10 billion rows with hundreds of data points per row. We run over 10¹⁸ calculations daily using MapReduce directly on HBase. We heart HBase.
+ 
+ [[https://github.com/ermanpattuk/BigSecret|BigSecret]] - A security framework designed to secure key-value data while preserving efficient processing. It achieves cell-level security by combining different cryptographic techniques, and is provided as a wrapper library around HBase.
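+ The entry does not show BigSecret's API. Purely as an illustration of the "wrapper library around HBase" idea it describes, here is a minimal sketch of a client-side wrapper that encrypts cell values before a Put and decrypts them after a Get; the class, method names, and use of plain AES via javax.crypto are assumptions for illustration, not BigSecret's actual design.
+ {{{
// Hypothetical sketch of a cell-level encrypting wrapper around the HBase client.
// Not the BigSecret API; names and the AES choice are illustrative assumptions.
// A real design would use an authenticated cipher mode with per-cell IVs.
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;

public class EncryptingTable {
  private final HTableInterface table;
  private final SecretKey key;

  public EncryptingTable(HTableInterface table, SecretKey key) {
    this.table = table;
    this.key = key;
  }

  // Encrypt the cell value before it ever leaves the client.
  public void put(byte[] row, byte[] family, byte[] qualifier, byte[] value) throws Exception {
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.ENCRYPT_MODE, key);
    Put put = new Put(row);
    put.add(family, qualifier, cipher.doFinal(value));
    table.put(put);
  }

  // Fetch the stored ciphertext and decrypt it on the client.
  public byte[] get(byte[] row, byte[] family, byte[] qualifier) throws Exception {
    Result result = table.get(new Get(row));
    byte[] ciphertext = result.getValue(family, qualifier);
    if (ciphertext == null) {
      return null;
    }
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.DECRYPT_MODE, key);
    return cipher.doFinal(ciphertext);
  }
}
+ }}}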
  
  [[http://caree.rs|Caree.rs]] - An accelerated hiring platform for high-tech companies. We use HBase and Hadoop for all aspects of our backend - job and company data storage, analytics processing, and machine learning algorithms for our hire recommendation engine. Our live production site is served directly from HBase. We use Cascading to run offline data processing jobs.