Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2011/04/07 11:17:08 UTC
[Hadoop Wiki] Update of "PoweredBy" by prosch
Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The "PoweredBy" page has been changed by prosch.
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=273&rev2=274
--------------------------------------------------
* Our clusters vary from 10 to 500 nodes
* Hypertable is also supported by Baidu
- * [[http://www.beebler.com|Beebler]]
+ * [[http://www.beebler.com|Beebler]] [[http://technischeuebersetzung.mojblog.hr/|Blog]]
* 14 node cluster (each node has: 2 dual core CPUs, 2TB storage, 8GB RAM)
* We use Hadoop for matching dating profiles
@@ -511, +511 @@
= U =
* [[http://glud.udistrital.edu.co|Universidad Distrital Francisco Jose de Caldas (Grupo GICOGE/Grupo Linux UD GLUD/Grupo GIGA)]]
- . 5 node low-profile cluster. We use Hadoop to support the research project: Territorial Intelligence System of Bogota City.
+ . 5 node low-profile cluster. We use Hadoop to support the research project: Territorial Intelligence System of Bogota City.[[http://prosch.blog.de/|Übersetzung]]
* [[http://ir.dcs.gla.ac.uk/terrier/|University of Glasgow - Terrier Team]]
* 30-node cluster (Xeon Quad Core 2.4GHz, 4GB RAM, 1TB/node storage).
@@ -543, +544 @@
* [[http://www.web-alliance.fr|Web Alliance]]
* We use Hadoop for our internal search engine optimization (SEO) tools. It allows us to store, index, and search data much faster.
* We also use it for logs analysis and trends prediction.
- * [[http://www.worldlingo.com/|WorldLingo]]
+ * [[http://www.worldlingo.com/|WorldLingo]] [[http://uebersetzer1.wordpress.com/|Wordpress]]
* Hardware: 44 servers (each server has: 2 dual core CPUs, 2TB storage, 8GB RAM)
* Each server runs Xen with one Hadoop/HBase instance and another instance with web or application servers, giving us 88 usable virtual machines.
* We run two separate Hadoop/HBase clusters with 22 nodes each.
@@ -562, +563 @@
* >60% of Hadoop Jobs within Yahoo are Pig jobs.
= Z =
- * [[http://www.zvents.com/|Zvents]]
+ * [[http://www.zvents.com/|Zvents]]
* 10 node cluster (Dual-Core AMD Opteron 2210, 4GB RAM, 1TB/node storage)
* Run Naive Bayes classifiers in parallel over crawl data to discover event information