Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2010/02/25 19:49:09 UTC

[Hadoop Wiki] Update of "PoweredBy" by voyager

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "PoweredBy" page has been changed by voyager.
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=178&rev2=179

--------------------------------------------------

   * [[http://www.weblab.infosci.cornell.edu/|Cornell University Web Lab]]
    * Generating web graphs on 100 nodes (dual 2.4 GHz Xeon processors, 2 GB RAM, 72 GB hard drive)
  
- 
- 
- 
   * [[http://www.deepdyve.com|Deepdyve]]
    * Elastic cluster with 5-80 nodes
    * We use Hadoop to create our indexes of deep-web content and to provide a high-availability, high-bandwidth storage service for index shards for our search cluster.
@@ -103, +100 @@

    * We use Hadoop to filter and index our listings, removing exact duplicates and grouping similar ones; a sketch of such a deduplication job appears below.
    * We plan to use Pig shortly to produce statistics.
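
  A minimal sketch of the kind of duplicate-removal job described in this entry, written as a Hadoop Streaming script in Python. The file name dedup.py, the MD5 content-hash key, and the input format (one listing record per line) are illustrative assumptions, not the actual pipeline:

    #!/usr/bin/env python
    # dedup.py -- hypothetical Hadoop Streaming job for exact-duplicate removal.
    # The mapper keys each record by a hash of its content; the shuffle sorts
    # by that key, and the reducer keeps the first record per hash.
    import hashlib
    import sys

    def mapper():
        for line in sys.stdin:
            record = line.rstrip("\n")
            key = hashlib.md5(record.encode("utf-8")).hexdigest()
            print("%s\t%s" % (key, record))

    def reducer():
        last_key = None
        for line in sys.stdin:
            key, _, record = line.rstrip("\n").partition("\t")
            if key != last_key:   # first record with this hash: keep it
                print(record)
                last_key = key    # later records under this key are duplicates

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

  It would run under the streaming jar shipped with Hadoop, along the lines of: hadoop jar contrib/streaming/hadoop-*-streaming.jar -input listings -output deduped -mapper 'dedup.py map' -reducer 'dedup.py reduce' -file dedup.py. Grouping similar (not just identical) listings would need a fuzzier key, e.g. a hash of normalized fields, in place of the exact MD5.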
  
- 
   * [[http://blog.espol.edu.ec/hadoop/|ESPOL University (Escuela Superior Politécnica del Litoral) in Guayaquil, Ecuador]]
    * 4-node proof-of-concept cluster.
    * We use Hadoop in a Data-Intensive Computing capstone course. The course projects cover topics like information retrieval, machine learning, social network analysis, business intelligence, and network security.
@@ -117, +113 @@

    * Facial similarity and recognition across large datasets.
    * Image-content-based advertising and auto-tagging for social media.
    * Image-based video copyright protection.
- 
  
   * [[http://www.facebook.com/|Facebook]]
    * We use Hadoop to store copies of internal log and dimension data sources and use it as a source for reporting/analytics and machine learning.
@@ -141, +136 @@

   * [[http://www.google.com|Google]]
    * [[http://www.google.com/intl/en/press/pressrel/20071008_ibm_univ.html|University Initiative to Address Internet-Scale Computing Challenges]]
  
- 
- 
   * [[http://www.gruter.com|Gruter. Corp.]]
    * 30-machine cluster (4 cores, 1-2 TB storage per machine)
    * Storage for blog data and web documents
@@ -226, +219 @@

  
   * [[http://www.lotame.com|Lotame]]
    * Using Hadoop and HBase for storage, log analysis, and pattern discovery/analysis.
+ 
+ 
+  * [[http://www.makara.com/|Makara]]
+   * Using ZooKeeper on a 2-node cluster on VMware Workstation, Amazon EC2, and Xen
+   * Using zkpython; a minimal sketch follows this entry
+   * Looking into expanding to a 100-node cluster
+ 
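
  A minimal zkpython sketch along the lines of the entry above: connect to a small ensemble, publish an ephemeral znode, and read it back. The host names zk1/zk2 and the /workers path are made-up placeholders:

    #!/usr/bin/env python
    # Hypothetical zkpython session: the ephemeral node disappears
    # automatically when the client's session ends.
    import threading
    import zookeeper

    # world-anyone ACL (read/write/create/delete/admin)
    ZOO_OPEN_ACL_UNSAFE = {"perms": 0x1f, "scheme": "world", "id": "anyone"}

    connected = threading.Event()

    def watcher(handle, event_type, state, path):
        # zookeeper.init() is asynchronous; this fires once the session is up
        if state == zookeeper.CONNECTED_STATE:
            connected.set()

    handle = zookeeper.init("zk1:2181,zk2:2181", watcher)  # placeholder hosts
    connected.wait(10)

    if zookeeper.exists(handle, "/workers") is None:
        zookeeper.create(handle, "/workers", "", [ZOO_OPEN_ACL_UNSAFE], 0)
    zookeeper.create(handle, "/workers/w1", "alive",
                     [ZOO_OPEN_ACL_UNSAFE], zookeeper.EPHEMERAL)

    data, stat = zookeeper.get(handle, "/workers/w1")
    print(data)   # "alive"
    zookeeper.close(handle)

  The same calls work unchanged whether the ensemble runs on VMware, EC2, or Xen guests; only the connect string changes.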
  
   * [[http://www.crmcs.com/|MicroCode]]
    * 18-node cluster (quad-core Intel Xeon, 1 TB storage per node)