Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2011/04/07 20:12:07 UTC

PoweredBy reverted to revision 273 on Hadoop Wiki

Dear wiki user,

You have subscribed to a wiki page "Hadoop Wiki" for change notification.

The page PoweredBy has been reverted to revision 273 by DougCutting.
The comment on this change is: spam removal.
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=274&rev2=275

--------------------------------------------------

    * Our clusters vary from 10 to 500 nodes
    * Hypertable is also supported by Baidu
  
-  * [[http://www.beebler.com|Beebler]] [[http://technischeuebersetzung.mojblog.hr/|Blog]]
+  * [[http://www.beebler.com|Beebler]]
    * 14 node cluster (each node has: 2 dual core CPUs, 2TB storage, 8GB RAM)
    * We use hadoop for matching dating profiles
  
@@ -511, +511 @@

  
  = U =
   * [[http://glud.udistrital.edu.co|Universidad Distrital Francisco Jose de Caldas (Grupo GICOGE/Grupo Linux UD GLUD/Grupo GIGA]]
-   . 5 node low-profile cluster. We use Hadoop to support the research project: Territorial Intelligence System of Bogota City.[[http://prosch.blog.de/
+   . 5 node low-profile cluster. We use Hadoop to support the research project: Territorial Intelligence System of Bogota City.
- |Übersetzung]]
  
   * [[http://ir.dcs.gla.ac.uk/terrier/|University of Glasgow - Terrier Team]]
    * 30 nodes cluster (Xeon Quad Core 2.4GHz, 4GB RAM, 1TB/node storage).
@@ -544, +543 @@

   * [[http://www.web-alliance.fr|Web Alliance]]
    * We use Hadoop for our internal search engine optimization (SEO) tools. It allows us to store, index, search data in a much faster way.
    * We also use it for logs analysis and trends prediction.
-  * [[http://www.worldlingo.com/|WorldLingo]] [[http://uebersetzer1.wordpress.com/|Wordpress]]
+  * [[http://www.worldlingo.com/|WorldLingo]]
    * Hardware: 44 servers (each server has: 2 dual core CPUs, 2TB storage, 8GB RAM)
    * Each server runs Xen with one Hadoop/HBase instance and another instance with web or application servers, giving us 88 usable virtual machines.
    * We run two separate Hadoop/HBase clusters with 22 nodes each.
@@ -563, +562 @@

    * >60% of Hadoop Jobs within Yahoo are Pig jobs.
  
  = Z =
-  * [[http://www.zvents.com/|Zvents]] 
+  * [[http://www.zvents.com/|Zvents]]
    * 10 node cluster (Dual-Core AMD Opteron 2210, 4GB RAM, 1TB/node storage)
    * Run Naive Bayes classifiers in parallel over crawl data to discover event information