Posted to mapreduce-user@hadoop.apache.org by Vincent <vi...@datashaping.com> on 2014/05/29 23:26:38 UTC
Hadoop Automation: Eliminate Data Bottlenecks
Join us on June 10th, 2014 at 9am PDT for our latest Data Science Central
Webinar Event: Hadoop Automation: Eliminate Data Bottlenecks
Link: http://bit.ly/RH7rvG
The TRACE3 Big Data Intelligence Team, in partnership with StackIQ and Cisco,
has developed a program that automates the deployment and management of
Hadoop clusters and overcomes common cluster management challenges.
With StackIQ Cluster Manager and Cisco's UCS Common Platform Architecture
(CPA), the inefficiencies and risks associated with disparate tools and
manual scripting are eliminated. Big Data sets and Hadoop clusters are
integrated into a unified, fabric-based architecture optimized for Big Data
workloads.
You will learn how the partnership of Trace3, Cisco, and StackIQ can:
- Accelerate infrastructure deployment for fast, easy growth
- Deliver Big Data / Hadoop infrastructure in a small, efficient footprint
- Handle the most complex workloads with Hadoop clusters supported by
hundreds of servers and petabytes of storage
- Quickly load terabytes of data into the system over a high-bandwidth
unified fabric
- Quickly gain Customer Insight and Business Intelligence to enable faster
innovation
- Automate your Big Data / Hadoop process to deliver reliable,
cost-effective Decision Science
Speakers:
- Sean McKeown, Technology Solutions Architect, Cisco
- Nick Durkin, Architect, TRACE3
- Tim McIntire, CEO and Co-Founder, StackIQ
Hosted by: Tim Matteson, Cofounder, Data Science Central
Title: Hadoop Automation: Eliminate Data Bottlenecks
Date: Tuesday, June 10th, 2014
Time: 9:00 AM - 10:00 AM PDT
Register at http://bit.ly/RH7rvG
After registering, you will receive a confirmation email with
information about joining the webinar.