Posted to dev@chukwa.apache.org by ey...@apache.org on 2016/11/12 22:00:05 UTC

[1/2] chukwa git commit: CHUKWA-810. Update code documentation with Apache trademark. (Eric Yang)

Repository: chukwa
Updated Branches:
  refs/heads/master 3a356ddbb -> fcfcc088b


CHUKWA-810.  Update code documentation with Apache trademark. (Eric Yang)


Project: http://git-wip-us.apache.org/repos/asf/chukwa/repo
Commit: http://git-wip-us.apache.org/repos/asf/chukwa/commit/20be5ae5
Tree: http://git-wip-us.apache.org/repos/asf/chukwa/tree/20be5ae5
Diff: http://git-wip-us.apache.org/repos/asf/chukwa/diff/20be5ae5

Branch: refs/heads/master
Commit: 20be5ae512207de60ef57d596ec0c0fe30d00743
Parents: 3a356dd
Author: Eric Yang <ey...@apache.org>
Authored: Sat Oct 8 10:30:47 2016 -0700
Committer: Eric Yang <ey...@apache.org>
Committed: Sat Oct 8 10:30:47 2016 -0700

----------------------------------------------------------------------
 src/site/apt/Quick_Start_Guide.apt.vm  |  44 +++++++-------
 src/site/apt/agent.apt                 |  20 +++----
 src/site/apt/async_ack.apt             |   8 +--
 src/site/apt/dataflow.apt              |   6 +-
 src/site/apt/datamodel.apt             |   6 +-
 src/site/apt/design.apt                |  20 +++----
 src/site/apt/index.apt                 |  32 +++++------
 src/site/apt/pipeline.apt              |  12 ++--
 src/site/apt/programming.apt           |  18 +++---
 src/site/apt/releasenotes.apt.vm       |  20 ++++---
 src/site/apt/user.apt.vm               |  86 ++++++++++++++--------------
 src/site/resources/images/asf_logo.png | Bin 0 -> 21243 bytes
 src/site/site.xml                      |   9 ++-
 13 files changed, 144 insertions(+), 137 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/Quick_Start_Guide.apt.vm
----------------------------------------------------------------------
diff --git a/src/site/apt/Quick_Start_Guide.apt.vm b/src/site/apt/Quick_Start_Guide.apt.vm
index 210ddde..8a5a888 100644
--- a/src/site/apt/Quick_Start_Guide.apt.vm
+++ b/src/site/apt/Quick_Start_Guide.apt.vm
@@ -13,55 +13,55 @@
 ~~ See the License for the specific language governing permissions and
 ~~ limitations under the License.
 ~~
-Chukwa Quick Start Guide
+Apache Chukwa Quick Start Guide
 
 Purpose
 
-  Chukwa is a system for large-scale reliable log collection and processing with Hadoop. The Chukwa design overview discusses the overall architecture of Chukwa. You should read that document before this one. The purpose of this document is to help you install and configure Chukwa.
+  Apache Chukwa is a system for large-scale reliable log collection and processing with Hadoop. The Apache Chukwa design overview discusses the overall architecture of Apache Chukwa. You should read that document before this one. The purpose of this document is to help you install and configure Apache Chukwa.
 
 
 Pre-requisites
 
-  Chukwa should work on any POSIX platform, but GNU/Linux is the only production platform that has been tested extensively. Chukwa has also been used successfully on Mac OS X, which several members of the Chukwa team use for development.
+  Apache Chukwa should work on any POSIX platform, but GNU/Linux is the only production platform that has been tested extensively. Apache Chukwa has also been used successfully on Mac OS X, which several members of the Apache Chukwa team use for development.
 
   Software requirements are Java 1.6 or better, ZooKeeper {{${zookeeperVersion}}}, HBase {{${hbaseVersion}}} and Hadoop {{${hadoopVersion}}}.
 
-  The Chukwa cluster management scripts rely on ssh; these scripts, however, are not required if you have some alternate mechanism for starting and stopping daemons.
+  The Apache Chukwa cluster management scripts rely on ssh; these scripts, however, are not required if you have some alternate mechanism for starting and stopping daemons.
 
 
-Installing Chukwa
+Installing Apache Chukwa
 
-  A minimal Chukwa deployment has five components:
+  A minimal Apache Chukwa deployment has five components:
 
-  * A Hadoop and HBase cluster on which Chukwa will process data (referred to as the Chukwa cluster). 
+  * An Apache Hadoop and Apache HBase cluster on which Apache Chukwa will process data (referred to as the Apache Chukwa cluster).
   
-  * One or more agent processes, that send monitoring data to HBase. The nodes with active agent processes are referred to as the monitored source nodes.
+  * One or more agent processes that send monitoring data to Apache HBase. The nodes with active agent processes are referred to as the monitored source nodes.
   
-  * Solr Cloud cluster which Chukwa will store indexed log files.
+  * A Solr Cloud cluster in which Apache Chukwa will store indexed log files.
 
   * Data analytics script, summarize Hadoop Cluster Health.
 
-  * HICC, the Chukwa visualization tool.
+  * HICC, the Apache Chukwa visualization tool.
 
 []
 
-[./images/chukwa_architecture.png] Chukwa ${VERSION} Architecture 
+[./images/chukwa_architecture.png] Apache Chukwa ${VERSION} Architecture 
 
 First Steps
 
-  * Obtain a copy of Chukwa. You can find the latest release on the Chukwa {{{http://www.apache.org/dyn/closer.cgi/chukwa/}release page}} (or alternatively check the source code out from SCM).
+  * Obtain a copy of Apache Chukwa. You can find the latest release on the Apache Chukwa {{{http://www.apache.org/dyn/closer.cgi/chukwa/}release page}} (or alternatively check the source code out from SCM).
 
   * Un-tar the release, via tar xzf.
 
-  * Make sure a copy of Chukwa is available on each node being monitored.
+  * Make sure a copy of Apache Chukwa is available on each node being monitored.
 
-  * We refer to the directory containing Chukwa as CHUKWA_HOME. It may be useful to set CHUKWA_HOME explicitly in your environment for ease of use.
+  * We refer to the directory containing Apache Chukwa as CHUKWA_HOME. It may be useful to set CHUKWA_HOME explicitly in your environment for ease of use.
 
-Setting Up Chukwa Cluster
+Setting Up Apache Chukwa Cluster
 
 * Configure Hadoop and HBase
 
-  [[1]] Copy Chukwa files to Hadoop and HBase directories:
+  [[1]] Copy Apache Chukwa files to Hadoop and HBase directories:
 
 ---
 cp $CHUKWA_HOME/etc/chukwa/hadoop-log4j.properties $HADOOP_CONF_DIR/log4j.properties
@@ -74,7 +74,7 @@ cp $CHUKWA_HOME/share/chukwa/chukwa-${VERSION}-client.jar $HBASE_HOME/lib
 cp $CHUKWA_HOME/share/chukwa/lib/json-simple-${json-simpleVersion}.jar $HBASE_HOME/lib
 ---  
 
-  [[2]] Restart your Hadoop Cluster. General Hadoop configuration is available at: {{{http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html}Hadoop Configuration}}. <<N.B.>> You may see some additional logging messages at this stage which looks as if error(s) are present. These messages are showing up because the log4j socket appender writes to stderr for warn messages when it is unable to stream logs to a log4j socket server. If the Chukwa agent is started with socket adaptors prior to Hadoop and HBase, those messages will not show up. For the time being do not worry about these messages, they will disappear once Chukwa agent is started with socket adaptors.
+  [[2]] Restart your Hadoop Cluster. General Hadoop configuration is available at: {{{http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html}Hadoop Configuration}}. <<N.B.>> You may see some additional logging messages at this stage which look as if error(s) are present. These messages are showing up because the log4j socket appender writes to stderr for warn messages when it is unable to stream logs to a log4j socket server. If the Apache Chukwa agent is started with socket adaptors prior to Hadoop and HBase, those messages will not show up. For the time being do not worry about these messages; they will disappear once the Apache Chukwa agent is started with socket adaptors.
   
   [[3]] Make sure HBase is started. General HBASE configuration is available at: {{{http://hbase.apache.org/book.html#configuration}HBase Configuration}}
   
@@ -84,9 +84,9 @@ cp $CHUKWA_HOME/share/chukwa/lib/json-simple-${json-simpleVersion}.jar $HBASE_HO
 bin/hbase shell < $CHUKWA_HOME/etc/chukwa/hbase.schema
 ---
 
-  This procedure initializes the default Chukwa HBase schema.
+  This procedure initializes the default Apache Chukwa HBase schema.
 
-* Configuring And Starting Chukwa Agent
+* Configuring And Starting Apache Chukwa Agent
 
   [[1]] Edit CHUKWA_HOME/etc/chukwa/chukwa-env.sh. Make sure that JAVA_HOME, HADOOP_CONF_DIR, and HBASE_CONF_DIR are set correctly.
 
@@ -100,7 +100,7 @@ sbin/chukwa-daemon.sh start agent
 
 * Setup Solr to index Service log files
 
-  [[1]] Start Solr ${solrVersion} with Chukwa Solr configuration:
+  [[1]] Start Solr ${solrVersion} with the Apache Chukwa Solr configuration:
 
 ---
 bin/solr start -cloud -z localhost:2181
@@ -109,7 +109,7 @@ bin/solr start -cloud -z localhost:2181
 
 * Start HICC
 
-  The Hadoop Infrastructure Care Center (HICC) is the Chukwa web user interface. 
+  The Hadoop Infrastructure Care Center (HICC) is the Apache Chukwa web user interface. 
 
   [[1]] To start HICC, do the following:
 
@@ -127,4 +127,4 @@ http://<server>:4080/hicc/
   
   [[2]] The default user name and password is "admin" without quotes.
   
-  [[3]] Metrics data collected by Chukwa Agent will be browsable through Graph Explorer widget.
+  [[3]] Metrics data collected by the Apache Chukwa Agent will be browsable through the Graph Explorer widget.
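
To verify that the schema step in this guide took effect, listing tables from the HBase shell is a reasonable sanity check — a minimal sketch, assuming the standard HBase shell <list> command; the table names printed depend on the shipped hbase.schema file:

---
# Run from $HBASE_HOME; prints the tables created by
# $CHUKWA_HOME/etc/chukwa/hbase.schema if the load succeeded.
echo "list" | bin/hbase shell
---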

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/agent.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/agent.apt b/src/site/apt/agent.apt
index e002f25..b874145 100644
--- a/src/site/apt/agent.apt
+++ b/src/site/apt/agent.apt
@@ -16,7 +16,7 @@
 
 Agent Configuration Guide
 
-  In a normal Chukwa installation, an <Agent> process runs on every 
+  In a normal Apache Chukwa installation, an <Agent> process runs on every 
 machine being monitored. This process is responsible for all the data collection
 on that host.  Data collection might mean periodically running a Unix command,
 or tailing a file, or listening for incoming UDP packets.
@@ -28,8 +28,8 @@ watched or for each Unix command being executed. Each adaptor has a unique name.
 If you do not specify a name, one will be auto-generated by hashing the 
 Adaptor type and parameters.
 
-  There are a number of Adaptors built into Chukwa, and you can also develop
-your own. Chukwa will use them if you add them to the Chukwa library search 
+  There are a number of Adaptors built into Apache Chukwa, and you can also develop
+your own. Apache Chukwa will use them if you add them to the Apache Chukwa library search 
 path (e.g., by putting them in a jarfile in <$CHUKWA_HOME/lib>.)
 
 Agent Control
@@ -75,17 +75,17 @@ the adaptor parameters.
 
   The adaptor name, if specified, should go after the add command, and be 
 followed with an equals sign. It should be a string of printable characters, 
-without whitespace or '='.  Chukwa Adaptor names all start with "adaptor_".
+without whitespace or '='.  Apache Chukwa Adaptor names all start with "adaptor_".
 If you specify an adaptor name which does not start with that prefix, it will
 be added automatically.  
 
-  Adaptor parameters aren't required by the Chukwa agent, but each class of 
+  Adaptor parameters aren't required by the Apache Chukwa agent, but each class of 
 adaptor may itself specify both mandatory and optional parameters. See below.
 
 Configuration options
 
-  Chukwa agents are configured via the file <conf/chukwa-agent-conf.xml.>
-Chukwa control port runs on port 9093 by default.
+  Apache Chukwa agents are configured via the file <conf/chukwa-agent-conf.xml>.
+The Apache Chukwa control port runs on port 9093 by default.
 
 ---
   <property>
@@ -95,7 +95,7 @@ Chukwa control port runs on port 9093 by default.
   </property>
 ---
 
-  Chukwa agent working directory:
+  The Apache Chukwa agent working directory:
 
 ---
   <property>
@@ -150,12 +150,12 @@ add filetailer.FileTailingAdaptor BarData /foo/bar 0
   * <<filetailer.CharFileTailingAdaptorUTF8NewLineEscaped>>
      The same, except that chunks are guaranteed to end only at 
      non-escaped carriage returns. This is useful for pushing 
-     Chukwa-formatted log files, where exception
+     Apache Chukwa-formatted log files, where exception
      stack traces stay in a single chunk.
 
   * <<filetailer.FileTailingAdaptorPreserveLines>>
 	Similar to CharFileTailingAdaptorUTF8. The difference with the latter is
-	mainly seen in the Demux Chukwa process: CharFileTailingAdaptorUTF8 will process
+	mainly seen in the Demux process: CharFileTailingAdaptorUTF8 will process
 	every line one by one whereas FileTailingAdaptorPreserveLines will process
 	all the lines of a same Chunk in a same go which makes the Demux jobs faster.
 	Same parameters and usage as the above.
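
A quick way to exercise the agent control protocol described in this file is an interactive session against the default control port; the add command below is the same example used in the guide:

---
# Connect to the agent control socket (port 9093 by default),
# then type the command on the open connection:
telnet localhost 9093
add filetailer.FileTailingAdaptor BarData /foo/bar 0
---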

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/async_ack.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/async_ack.apt b/src/site/apt/async_ack.apt
index a570d2a..a4d330b 100644
--- a/src/site/apt/async_ack.apt
+++ b/src/site/apt/async_ack.apt
@@ -16,13 +16,13 @@
 
 Overview
 
-  Chukwa supports two different reliability strategies.
+  Apache Chukwa supports two different reliability strategies.
 The first, default strategy, is as follows: collectors write data to HDFS, and
 as soon as the HDFS write call returns success, report success to the agent, 
 which advances its checkpoint state.
 
   This is potentially a problem if HDFS (or some other storage tier) has 
-non-durable or asynchronous writes. As a result, Chukwa offers a mechanism, 
+non-durable or asynchronous writes. As a result, Apache Chukwa offers a mechanism, 
 asynchronous acknowledgement, for coping with this case.
 
   This mechanism can be enabled by setting option <httpConnector.asyncAcks>.
@@ -36,7 +36,7 @@ answer questions about the state of the filesystem.
 Theory
 
   In this approach, rather than try to build a fault tolerant collector, 
-Chukwa agents look <<through>> the collectors to the underlying state of the 
+Apache Chukwa agents look <<through>> the collectors to the underlying state of the 
 filesystem. This filesystem state is what is used to detect and recover from 
 failure. Recovery is handled entirely by the agent, without requiring anything 
 at all from the failed collector.
@@ -71,7 +71,7 @@ the period between collector file rotations.
   The solution is end-to-end. Authoritative copies of data can only exist in 
 two places: the nodes where data was originally produced, and the HDFS file 
 system where it will ultimately be stored. Collectors only hold soft state;  
-the only ``hard'' state stored by Chukwa is the agent checkpoints. Below is a 
+the only ``hard'' state stored by Apache Chukwa is the agent checkpoints. Below is a 
 diagram of the flow of messages in this protocol.
 
 Configuration
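
A minimal sketch of enabling the <httpConnector.asyncAcks> option named above — the property block format mirrors the other agent options in this documentation, and the boolean value is an assumption, since only the option name appears in the text:

---
  <property>
    <name>httpConnector.asyncAcks</name>
    <!-- Assumed boolean flag; enables asynchronous acknowledgement. -->
    <value>true</value>
  </property>
---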

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/dataflow.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/dataflow.apt b/src/site/apt/dataflow.apt
index cec5189..8eabea2 100644
--- a/src/site/apt/dataflow.apt
+++ b/src/site/apt/dataflow.apt
@@ -18,11 +18,11 @@ HDFS Storage Layout
 
 Overview
 
-  This document describes how Chukwa data is stored in HDFS and the processes that act on it.
+  This document describes how Apache Chukwa data is stored in HDFS and the processes that act on it.
 
 HDFS File System Structure
 
-  The general layout of the Chukwa filesystem is as follows.
+  The general layout of the Apache Chukwa filesystem is as follows.
 
 ---
 /chukwa/
@@ -39,7 +39,7 @@ HDFS File System Structure
 
 Raw Log Collection and Aggregation Workflow
 
-  What data is stored where is best described by stepping through the Chukwa workflow.
+  What data is stored where is best described by stepping through the Apache Chukwa workflow.
 
   [[1]] Agents write chunks to <logs/*.chukwa> files until a 64MB chunk size is reached or a given time interval has passed.
 

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/datamodel.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/datamodel.apt b/src/site/apt/datamodel.apt
index c0dd928..041a74b 100644
--- a/src/site/apt/datamodel.apt
+++ b/src/site/apt/datamodel.apt
@@ -16,7 +16,7 @@
 
 Data Model
 
-  Chukwa Adaptors emit data in <Chunks>. A Chunk is a sequence of bytes,
+  Apache Chukwa Adaptors emit data in <Chunks>. A Chunk is a sequence of bytes,
 with some metadata. Several of these are set automatically by the Agent or 
 Adaptors. Two of them require user intervention: <cluster name> and 
 <datatype>.  Cluster name is specified in <conf/chukwa-agent-conf.xml>,
@@ -69,7 +69,7 @@ HBase Schema
 
   Row key is composed of 14 bytes data.  First 2 bytes are day of the year.
 The next 6 bytes are md5 signature of metrics name.  The last 6 bytes are
-md5 signature of data source.  This arrangement helps Chukwa to partition
+md5 signature of data source.  This arrangement helps Apache Chukwa to partition
 data evenly across regions base on time.
 
   This arrangement provides a good condensed store for data of the same day
@@ -77,7 +77,7 @@ for the same source.
 
 ** Column Family
 
-  The column family format for Chukwa table are:
+  The column family format for the Apache Chukwa table is:
 
 *---------------*-----------------------------------------------------------------:
 | Column Family | Description                                                     |
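
Restating the 14-byte row key layout from the schema section above in one place (which 6 of the 16 md5 digest bytes are used is not spelled out here):

---
bytes  0-1  : day of the year
bytes  2-7  : md5 signature of the metric name (6 bytes)
bytes  8-13 : md5 signature of the data source (6 bytes)
---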

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/design.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/design.apt b/src/site/apt/design.apt
index 4122fe6..50a7ea1 100644
--- a/src/site/apt/design.apt
+++ b/src/site/apt/design.apt
@@ -16,15 +16,15 @@
 
 Introduction
 
-  Chukwa aims to provide a flexible and powerful platform for distributed
+  Apache Chukwa aims to provide a flexible and powerful platform for distributed
 data collection and rapid data processing. Our goal is to produce a system
 that's usable today, but that can be modified to take advantage of newer
 storage technologies (HDFS appends, HBase, etc) as they mature. In order
-to maintain this flexibility, Chukwa is structured as a pipeline of
+to maintain this flexibility, Apache Chukwa is structured as a pipeline of
 collection and processing stages, with clean and narrow interfaces between
 stages. This will facilitate future innovation without breaking existing code.
 
-Chukwa has five primary components:
+Apache Chukwa has five primary components:
 
   * <<Adaptors>> that collect data from various data source.
 
@@ -37,7 +37,7 @@ Chukwa has five primary components:
   * <<HICC>>, the Hadoop Infrastructure Care Center; a web-portal
     style interface for displaying data.
   	  
-  Below is a figure showing the Chukwa data pipeline, annotated with data
+  Below is a figure showing the Apache Chukwa data pipeline, annotated with data
 dwell times at each stage. A more detailed figure is available at the end
 of this document.
 
@@ -45,14 +45,14 @@ of this document.
 	  
 Agents and Adaptors
 
-  Chukwa agents do not collect some particular fixed set of data. Rather, they
+  Apache Chukwa agents do not collect some particular fixed set of data. Rather, they
 support dynamically starting and stopping <Adaptors>, which small
 dynamically-controllable modules that run inside the Agent process and are
 responsible for the actual collection of data.
 
   These dynamically controllable data sources are called 
 adaptors, since they generally are wrapping some other data source, 
-such as a file or a Unix command-line tool.  The Chukwa 
+such as a file or a Unix command-line tool.  The Apache Chukwa 
 {{{./agent.html}agent guide}} includes an up-to-date list of available Adaptors.
 
   Data sources need to be dynamically controllable because the particular data
@@ -64,7 +64,7 @@ metrics on an NFS server.
 
 ETL Processes
 
-  Chukwa Agents can write data directly to HBase or sequence files. 
+  Apache Chukwa Agents can write data directly to HBase or sequence files. 
 This is convenient for rapidly getting data committed to stable storage. 
 
   HBase provides index by primary key, and manage data compaction.  It is
@@ -73,7 +73,7 @@ reports.
 
   HDFS provides better throughput for working with large volume of data.  
 It is more suitable for one time research analysis job .  But it's less 
-convenient for finding particular data items. As a result, Chukwa has a 
+convenient for finding particular data items. As a result, Apache Chukwa has a 
 toolbox of MapReduce jobs for organizing and processing incoming data. 
 		
   These jobs come in two kinds: <Archiving> and <Demux>.
@@ -107,9 +107,9 @@ which in turn is populated by collector or data analytic scripts
 that runs on the collected data, after Demux. The  
 {{{./admin.html}Administration guide}} has details on setting up HICC.
 
-HBase Integration
+Apache HBase Integration
 
-  Chukwa has adopted to use HBase to ensure data arrival in milli-seconds and
+  Apache Chukwa has adopted HBase to ensure data arrival in milliseconds and
 also make data available to down steam application at the same time.  This
 will enable monitoring application to have near realtime view as soon as
 data are arriving in the system.  The file rolling, archiving are replaced

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/index.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/index.apt b/src/site/apt/index.apt
index 86453ea..5dcbe94 100644
--- a/src/site/apt/index.apt
+++ b/src/site/apt/index.apt
@@ -16,40 +16,40 @@
 Overview
 
   Log processing was one of the original purposes of MapReduce. Unfortunately,
-using Hadoop MapReduce to monitor Hadoop can be inefficient.  Batch
-processing nature of Hadoop MapReduce prevents the system to provide real time
+using Apache Hadoop MapReduce to monitor Apache Hadoop can be inefficient.  The batch
+processing nature of Apache Hadoop MapReduce prevents the system from providing real-time
 status of the cluster.
 
-  We started this journey at beginning of 2008, and a lot of Hadoop components
+  We started this journey at the beginning of 2008, and a lot of Apache Hadoop components
 have been built to improve overall reliability of the system and 
 improve realtimeness of monitoring. We have adopted HBase to facilitate lower 
 latency of random reads and using in memory updates and write ahead logs to 
 improve the reliability for root cause analysis.
 
-  Logs are generated incrementally across many machines, but Hadoop MapReduce
+  Logs are generated incrementally across many machines, but Apache Hadoop MapReduce
 works best on a small number of large files. Merging the reduced output
 of multiple runs may require additional mapreduce jobs.  This creates some 
-overhead for data management on Hadoop.
+overhead for data management on Apache Hadoop.
 
-  Chukwa is a Hadoop subproject devoted to bridging that gap between logs
-processing and Hadoop ecosystem.  Chukwa is a scalable distributed monitoring 
-and analysis system, particularly logs from Hadoop and other distributed systems.
+  Apache Chukwa is an Apache Hadoop subproject devoted to bridging that gap between log
+processing and the Hadoop ecosystem.  Apache Chukwa is a scalable distributed monitoring 
+and analysis system, particularly for logs from Apache Hadoop and other distributed systems.
 
-  The Chukwa Documentation provides the information you need to get
-started using Chukwa. {{{./design.html} Architecture and Design document}}
-provides high level view of Chukwa design.
+  The Apache Chukwa documentation provides the information you need to get
+started using Apache Chukwa. The {{{./design.html} Architecture and Design document}}
+provides a high-level view of the Apache Chukwa design.
 
-  If you're trying to set up a Chukwa cluster from scratch, 
+  If you're trying to set up an Apache Chukwa cluster from scratch, 
 {{{./user.html} User Guide}} describes the setup and deploy procedure.
 
-  If you want to configure the Chukwa agent process, to control what's
+  If you want to configure the Apache Chukwa agent process to control what's
 collected, you should read the {{{./agent.html} Agent Guide}}. There is
 also a  {{{./pipeline.html} Pipeline Guide}} describing configuration
 parameters for ETL processes for the data pipeline.
      
-  And if you want to develop Chukwa to monitor other data source,
+  And if you want to develop Apache Chukwa to monitor other data sources,
 {{{./programming.html} Programming Guide}} maybe handy to learn
-about Chukwa programming API.
+about the Apache Chukwa programming API.
 
   If you have more questions, you can ask on the
-{{{mailto:user@chukwa.apache.org}Chukwa mailing lists}}
+{{{mailto:user@chukwa.apache.org}Apache Chukwa mailing lists}}

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/pipeline.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/pipeline.apt b/src/site/apt/pipeline.apt
index 1b4dd1c..1443991 100644
--- a/src/site/apt/pipeline.apt
+++ b/src/site/apt/pipeline.apt
@@ -18,7 +18,7 @@ Pipeline Configuration Guide
 
 Basic Options
 
-  Chukwa pipeline are responsible for accepting incoming data from Agents,
+  The Apache Chukwa pipeline is responsible for accepting incoming data from Agents,
 and extract, transform and load data to destination storage.  Most commonly, 
 pipeline simply write all received to HBase or HDFS.  
 
@@ -36,7 +36,7 @@ be configured in <chukwa-agent-conf.xml>.
 
   In this mode, HBase configuration is configured in <chukwa-env.sh>.
 HBASE_CONF_DIR should reference to HBae configuration directory to enable
-Chukwa agent to load <hbase-site.xml> from class path.
+the Apache Chukwa agent to load <hbase-site.xml> from the classpath.
 
 * HDFS
 
@@ -59,8 +59,8 @@ pipeline.
 Advanced Options
 
   There are some advanced options, not necessarily documented in the
-agent conf file, that are helpful in using Chukwa in nonstandard ways.
-While normally Chukwa writes sequence files to HDFS, it's possible to
+agent conf file, that are helpful in using Apache Chukwa in nonstandard ways.
+While normally Apache Chukwa writes sequence files to HDFS, it's possible to
 specify an alternate pipe class. The option <chukwa.pipeline> specifies 
 a Java class to instantiate and use as a writer. See the <ChukwaWriter> 
 javadoc for details.
@@ -69,7 +69,7 @@ javadoc for details.
 lets you string together a series of <PipelineableWriters>
 for pre-processing or post-processing incoming data.
 As an example, the SocketTeeWriter class allows other programs to get 
-incoming chunks fed to them over a socket by Chukwa agent.
+incoming chunks fed to them over a socket by the Apache Chukwa agent.
 	  	
   Stages in the pipeline should be listed, comma-separated, in option 
 <chukwa.pipeline>
@@ -184,7 +184,7 @@ SocketTeeWriter
 
   The <SocketTeeWriter> allows external processes to watch
 the stream of chunks passing through the agent. This allows certain kinds
-of real-time monitoring to be done on-top of Chukwa.
+of real-time monitoring to be done on top of Apache Chukwa.
 	  	
   SocketTeeWriter listens on a port (specified by conf option
 <chukwaCollector.tee.port>, defaulting to 9094.)  Applications
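
As a concrete illustration of the comma-separated <chukwa.pipeline> option described in this file, a tee-then-store configuration could be declared as follows — a sketch, assuming these fully qualified class names (only SocketTeeWriter is named in the text above):

---
  <property>
    <name>chukwa.pipeline</name>
    <!-- Stages run in listed order; the package paths are assumed. -->
    <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter</value>
  </property>
---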

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/programming.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/programming.apt b/src/site/apt/programming.apt
index b0d1fee..1ba9760 100644
--- a/src/site/apt/programming.apt
+++ b/src/site/apt/programming.apt
@@ -14,30 +14,30 @@
 ~~ limitations under the License.
 ~~
 
-Chukwa Programmers Guide
+Apache Chukwa Programmers Guide
 
-  At the core of Chukwa is a flexible system for collecting and processing
+  At the core of Apache Chukwa is a flexible system for collecting and processing
 monitoring data, particularly log files. This document describes how to use the
-collected data.  (For an overview of the Chukwa data model and collection 
+collected data.  (For an overview of the Apache Chukwa data model and collection 
 pipeline, see the {{{./design.html}Design Guide}}.)  
 
-  In particular, this document discusses the Chukwa archive file formats, the
-demux and archiving mapreduce jobs, and  the layout of the Chukwa storage directories.
+  In particular, this document discusses the Apache Chukwa archive file formats, the
+demux and archiving mapreduce jobs, and the layout of the Apache Chukwa storage directories.
 
 Agent REST API
 
-  Chukwa Agent offers programmable API to control Agent adaptors for collecting data from
+  The Apache Chukwa Agent offers a programmable API to control Agent adaptors for collecting data from
 remote sources, or setup a listening port for incoming data stream.  Usage guide and
 examples are documented in {{{./apidocs/agent-rest.html} Agent REST API doc}}.
 
 Demux
 
-  A key use for Chukwa is processing arriving data, in parallel, using Chukwa Demux.
-The most common way to do this is using the Chukwa demux framework.
+  A key use for Apache Chukwa is processing arriving data, in parallel, using Apache Chukwa Demux.
+The most common way to do this is using the Apache Chukwa demux framework.
 As {{{./design.html}data flows through Chukwa}}, the demux parsers are often the
 first user defined function to process data.
 
-  By default, Chukwa will use the default TsProcessor. This parser will try to
+  By default, Apache Chukwa will use the default TsProcessor. This parser will try to
 extract the real log statement from the log entry using the ISO8601 date 
 format. If it fails, it will use the time at which the chunk was written to
 disk (agent timestamp).
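
For instance, a log line such as the hypothetical one below starts with the log4j-style ISO8601 timestamp the default TsProcessor looks for; lines without one are stamped with the time the chunk was written to disk:

---
2016-10-08 10:30:47,123 INFO org.example.Worker: task completed
---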

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/releasenotes.apt.vm
----------------------------------------------------------------------
diff --git a/src/site/apt/releasenotes.apt.vm b/src/site/apt/releasenotes.apt.vm
index 590867a..385a278 100644
--- a/src/site/apt/releasenotes.apt.vm
+++ b/src/site/apt/releasenotes.apt.vm
@@ -13,22 +13,23 @@
 ~~ See the License for the specific language governing permissions and
 ~~ limitations under the License.
 ~~
-Release Notes - Hadoop Chukwa - Version ${VERSION}
+Release Notes - Apache Chukwa - Version ${VERSION}
 
 Overall Status
 
-  This is the fifth public release of Chukwa, a log analysis framework on top of Hadoop 
-and HBase.  Chukwa has been tested at scale and used in some production settings, and 
-is reasonably robust and well behaved. For instructions on setting up Chukwa, see the 
-administration guide and the rest of the Chukwa documentation.
+  This is the fifth public release of Apache Chukwa, a log analysis framework on top of 
+Apache Hadoop and Apache HBase.  Apache Chukwa has been tested at scale and used in some 
+production settings, and is reasonably robust and well behaved. For instructions on 
+setting up Apache Chukwa, see the administration guide and the rest of the Apache Chukwa 
+documentation.
 
 Important Changes Since Last Release
 
   * New dashboard design.
 
-  * New Chukwa Parquet file format.
+  * New Apache Chukwa Parquet file format.
 
-  * New HBase schema for improved low latency read performance.
+  * New Apache HBase schema for improved low latency read performance.
 
   * New Solr log indexing support.
 
@@ -37,8 +38,9 @@ Important Changes Since Last Release
 
 Requirements
 
-  Chukwa relies on Java 1.6, and requires maven 3.0.3 to build.
-The back-end processing requires Hadoop ${hadoopVersion}, HBase ${hbaseVersion}+, and Solr ${solrVersion}+.
+  Apache Chukwa relies on Java 1.6, and requires Maven 3.0.3 to build.
+The back-end processing requires Apache Hadoop ${hadoopVersion}, 
+Apache HBase ${hbaseVersion}+, and Apache Solr ${solrVersion}+.
 
 Known Limitations
 

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/apt/user.apt.vm
----------------------------------------------------------------------
diff --git a/src/site/apt/user.apt.vm b/src/site/apt/user.apt.vm
index 6c2a790..8e2ec52 100644
--- a/src/site/apt/user.apt.vm
+++ b/src/site/apt/user.apt.vm
@@ -14,69 +14,69 @@
 ~~ limitations under the License.
 ~~
 
-Chukwa User Guide
+Apache Chukwa User Guide
 
-  This chapter is the detailed configuration guide to Chukwa configuration.
+  This chapter is the detailed guide to Apache Chukwa configuration.
 
   Please read this chapter carefully and ensure that all requirements have 
 been satisfied. Failure to do so will cause you (and us) grief debugging 
 strange errors and/or data loss.
 
-  Chukwa uses the same configuration system as Hadoop. To configure a deploy, 
+  Apache Chukwa uses the same configuration system as Apache Hadoop. To configure a deploy, 
 edit a file of environment variables in etc/chukwa/chukwa-env.sh -- this 
 configuration is used mostly by the launcher shell scripts getting the 
 cluster off the ground -- and then add configuration to an XML file to do 
-things like override Chukwa defaults, tell Chukwa what Filesystem to use, 
+things like override Apache Chukwa defaults, tell Apache Chukwa what Filesystem to use, 
 or the location of the HBase configuration.
 
-  When running in distributed mode, after you make an edit to an Chukwa 
+  When running in distributed mode, after you make an edit to an Apache Chukwa 
 configuration, make sure you copy the content of the conf directory to all 
-nodes of the cluster. Chukwa will not do this for you. Use rsync.
+nodes of the cluster. Apache Chukwa will not do this for you. Use rsync.
 
 Pre-requisites
 
-  Chukwa should work on any POSIX platform, but GNU/Linux is the only
-production platform that has been tested extensively. Chukwa has also been used
-successfully on Mac OS X, which several members of the Chukwa team use for 
+  Apache Chukwa should work on any POSIX platform, but GNU/Linux is the only
+production platform that has been tested extensively. Apache Chukwa has also been used
+successfully on Mac OS X, which several members of the Apache Chukwa team use for 
 development.
 
   The only absolute software requirements are Java 1.6 or better,
-ZooKeeper {{${zookeeperVersion}}}, HBase {{${hbaseVersion}}} and Hadoop {{${hadoopVersion}}}.
+Apache ZooKeeper {{${zookeeperVersion}}}, Apache HBase {{${hbaseVersion}}} and Apache Hadoop {{${hadoopVersion}}}.
   
-  The Chukwa cluster management scripts rely on <ssh>; these scripts, however,
+  The Apache Chukwa cluster management scripts rely on <ssh>; these scripts, however,
 are not required if you have some alternate mechanism for starting and stopping
 daemons.
 
-Installing Chukwa
+Installing Apache Chukwa
 
-  A minimal Chukwa deployment has five components:
+  A minimal Apache Chukwa deployment has five components:
 
-  * A Hadoop and HBase cluster on which Chukwa will process data (referred to as the Chukwa cluster).
+  * An Apache Hadoop and Apache HBase cluster on which Apache Chukwa will process data (referred to as the Apache Chukwa cluster).
 
   * One or more agent processes, that send monitoring data to HBase.
     The nodes with active agent processes are referred to as the monitored 
     source nodes.
 
-  * Data analytics script, summarize Hadoop Cluster Health.
+  * A data analytics script that summarizes Apache Hadoop cluster health.
 
-  * HICC, the Chukwa visualization tool.
+  * HICC, the Apache Chukwa visualization tool.
 
 []
 
-[./images/chukwa_architecture.png] Chukwa Components
+[./images/chukwa_architecture.png] Apache Chukwa Components
 
 * First Steps
 
-  * Obtain a copy of Chukwa. You can find the latest release on the 
-    {{{http://hadoop.apache.org/chukwa/releases.html} Chukwa release page}}.
+  * Obtain a copy of Apache Chukwa. You can find the latest release on the 
+    {{{http://hadoop.apache.org/chukwa/releases.html} Apache Chukwa release page}}.
 
   * Un-tar the release, via <tar xzf>.
 
-  * Make sure a copy of Chukwa is available on each node being monitored.
+  * Make sure a copy of Apache Chukwa is available on each node being monitored.
 
-  * We refer to the directory containing Chukwa as <CHUKWA_HOME>. It may
+  * We refer to the directory containing Apache Chukwa as <CHUKWA_HOME>. It may
 be helpful to set <CHUKWA_HOME> explicitly in your environment,
-but Chukwa does not require that you do so.
+but Apache Chukwa does not require that you do so.
 
 * General Configuration
 
@@ -84,16 +84,16 @@ but Chukwa does not require that you do so.
 It's generally best to set this in <CHUKWA_HOME/etc/chukwa/chukwa-env.sh>.
 
   * In <CHUKWA_HOME/etc/chukwa/chukwa-env.sh>, set <CHUKWA_LOG_DIR> and
-<CHUKWA_PID_DIR> to the directories where Chukwa should store its
+<CHUKWA_PID_DIR> to the directories where Apache Chukwa should store its
 console logs and pid files.  The pid directory must not be shared between
-different Chukwa instances: it should be local, not NFS-mounted.
+different Apache Chukwa instances: it should be local, not NFS-mounted.
 
   * Optionally, set CHUKWA_IDENT_STRING. This string is
- used to name Chukwa's own console log files.
+ used to name Apache Chukwa's own console log files.
 
 Agents
 
-  Agents are the Chukwa processes that actually produce data. This section
+  Agents are the Apache Chukwa processes that actually produce data. This section
 describes how to configure and run them. More details are available in the
 {{{./agent.html} Agent configuration guide}}.
 
@@ -101,12 +101,12 @@ describes how to configure and run them. More details are available in the
 
   First, edit <$CHUKWA_HOME/etc/chukwa/chukwa-env.sh> In addition to 
 the general directions given above, you should set <HADOOP_CONF_DIR> and
-<HBASE_CONF_DIR>.  This should be the Hadoop deployment Chukwa will use to 
+<HBASE_CONF_DIR>.  This should be the Apache Hadoop deployment Apache Chukwa will use to 
 store collected data.  You will get a version mismatch error if this is 
 configured incorrectly.
 
   Edit the <CHUKWA_HOME/etc/chukwa/initial_adaptors> configuration file. 
-This is where you tell Chukwa what log files to monitor. See
+This is where you tell Apache Chukwa what log files to monitor. See
 {{{./agent.html#Adaptors} the adaptor configuration guide}} for
 a list of available adaptors.
 
@@ -127,7 +127,7 @@ machines.
 ---
 
   * Another important option is <chukwaAgent.checkpoint.dir>.
-This is the directory Chukwa will use for its periodic checkpoints of 
+This is the directory Apache Chukwa will use for its periodic checkpoints of 
 running adaptors.  It <<must not>> be a shared directory; use a local, 
 not NFS-mount, directory.
 
@@ -154,7 +154,7 @@ connections to the agent control socket.
 ** Use HDFS For Data Storage
 
   The one mandatory configuration parameter is <writer.hdfs.filesystem>.
-This should be set to the HDFS root URL on which Chukwa will store data.
+This should be set to the HDFS root URL on which Apache Chukwa will store data.
 Various optional configuration options are described in 
 {{{./pipeline.html} the pipeline configuration guide}}.
 
@@ -173,23 +173,23 @@ does the reverse.
 
   You can, of course, use any other daemon-management system you like. 
 For instance, <tools/init.d> includes init scripts for running
-Chukwa agents.
+Apache Chukwa agents.
 
   To check if an agent is working properly, you can telnet to the control
 port (9093 by default) and hit "enter". You will get a status message if
 the agent is running normally.
 
-Configuring Hadoop For Monitoring
+Configuring Apache Hadoop For Monitoring
 
-  One of the key goals for Chukwa is to collect logs from Hadoop clusters. 
-This section describes how to configure Hadoop to send its logs to Chukwa. 
-Note that these directions require Hadoop 0.205.0+.  Earlier versions of 
-Hadoop do not have the hooks that Chukwa requires in order to grab 
+  One of the key goals for Apache Chukwa is to collect logs from Apache Hadoop clusters. 
+This section describes how to configure Apache Hadoop to send its logs to Apache Chukwa. 
+Note that these directions require Apache Hadoop 0.205.0+.  Earlier versions of 
+Apache Hadoop do not have the hooks that Apache Chukwa requires in order to grab 
 MapReduce job logs.
 
-  The Hadoop configuration files are located in <HADOOP_HOME/etc/hadoop>.
-To setup Chukwa to collect logs from Hadoop, you need to change some of the 
-Hadoop configuration files.
+  The Apache Hadoop configuration files are located in <HADOOP_HOME/etc/hadoop>.
+To set up Apache Chukwa to collect logs from Apache Hadoop, you need to change some of 
+the Apache Hadoop configuration files.
 
   * Copy CHUKWA_HOME/etc/chukwa/hadoop-log4j.properties file to HADOOP_CONF_DIR/log4j.properties
 
@@ -199,7 +199,7 @@ Hadoop configuration files.
 
 Setup HBase Table
 
-  Chukwa is moving towards a model of using HBase to store metrics data to 
+  Apache Chukwa is moving towards a model of using HBase to store metrics data to 
 allow real-time charting. This section describes how to configure HBase and 
 HICC to work together.
 
@@ -220,7 +220,7 @@ HICC
 
 * Starting, Stopping, And Monitoring
 
-  The Hadoop Infrastructure Care Center (HICC) is the Chukwa web user interface.
+  The Hadoop Infrastructure Care Center (HICC) is the Apache Chukwa web user interface.
 HICC is started by invoking
 
 ---
@@ -236,9 +236,9 @@ browser to:
 
 Troubleshooting Tips
 
-* UNIX Processes For Chukwa Data Processes
+* UNIX Processes For Apache Chukwa Data Processes
 
-  Chukwa Data Processors are identified by:
+  Apache Chukwa Data Processors are identified by:
 
 ---
   org.apache.hadoop.chukwa.datacollection.agent.ChukwaAgent
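
For the mandatory <writer.hdfs.filesystem> option covered earlier in this guide, a property block along these lines points the writer at an HDFS root — the NameNode host and port below are placeholders:

---
  <property>
    <name>writer.hdfs.filesystem</name>
    <!-- Placeholder URL; substitute your NameNode address. -->
    <value>hdfs://namenode.example.com:8020/</value>
  </property>
---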

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/resources/images/asf_logo.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/asf_logo.png b/src/site/resources/images/asf_logo.png
new file mode 100644
index 0000000..b20bb7f
Binary files /dev/null and b/src/site/resources/images/asf_logo.png differ

http://git-wip-us.apache.org/repos/asf/chukwa/blob/20be5ae5/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
index 25b246d..05476a0 100644
--- a/src/site/site.xml
+++ b/src/site/site.xml
@@ -17,12 +17,17 @@
    limitations under the License.
 -->
 
-<project name="Chukwa">
+<project name="Apache Chukwa">
   <bannerLeft>
+    <name>Apache</name>
+    <src>images/asf_logo.png</src>
+    <href>http://www.apache.org/</href>
+  </bannerLeft>
+  <bannerRight>
     <name>Chukwa</name>
     <src>images/chukwa_logo_small.jpg</src>
     <href>http://chukwa.apache.org/</href>
-  </bannerLeft>
+  </bannerRight>
   <skin>
     <groupId>org.apache.maven.skins</groupId>
     <artifactId>maven-fluido-skin</artifactId>


[2/2] chukwa git commit: CHUKWA-812. Added throttle to dashboard save. (Eric Yang)

Posted by ey...@apache.org.
CHUKWA-812.  Added throttle to dashboard save. (Eric Yang)


Project: http://git-wip-us.apache.org/repos/asf/chukwa/repo
Commit: http://git-wip-us.apache.org/repos/asf/chukwa/commit/fcfcc088
Tree: http://git-wip-us.apache.org/repos/asf/chukwa/tree/fcfcc088
Diff: http://git-wip-us.apache.org/repos/asf/chukwa/diff/fcfcc088

Branch: refs/heads/master
Commit: fcfcc088b61a262f81876e24ecce854e13cbd6c1
Parents: 20be5ae
Author: Eric Yang <ey...@apache.org>
Authored: Sat Nov 12 13:59:47 2016 -0800
Committer: Eric Yang <ey...@apache.org>
Committed: Sat Nov 12 13:59:47 2016 -0800

----------------------------------------------------------------------
 CHANGES.txt                           |  2 ++
 src/main/web/hicc/home/index.html     | 10 +++++-----
 src/main/web/hicc/home/js/throttle.js | 22 ++++++++++++++++++++++
 3 files changed, 29 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/chukwa/blob/fcfcc088/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 44a5350..c7372d2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,8 @@ Trunk (unreleased changes)
 
   BUGS
 
+    CHUKWA-812.  Added throttle to dashboard save. (Eric Yang)
+
 Release 0.8 - 05/22/2016
 
   IMPROVEMENTS

http://git-wip-us.apache.org/repos/asf/chukwa/blob/fcfcc088/src/main/web/hicc/home/index.html
----------------------------------------------------------------------
diff --git a/src/main/web/hicc/home/index.html b/src/main/web/hicc/home/index.html
index aefe260..412a3ba 100755
--- a/src/main/web/hicc/home/index.html
+++ b/src/main/web/hicc/home/index.html
@@ -32,6 +32,7 @@
     <link rel="stylesheet" type="text/css" href="css/component.css" />
     <script src="js/modernizr.custom.js"></script>
     <script src="js/jquery.js" type="text/javascript"></script>
+    <script src="js/throttle.js" type="text/javascript"></script>
     <script src="js/jquery-ui.js"></script>
     <script src="js/lodash.min.js" type="text/javascript"></script>
     <script src="js/gridstack.min.js" type="text/javascript"></script>
@@ -207,11 +208,6 @@ function load() {
         gridstack.addWidget(buildWidget(this.src), this.col, this.row, this.size_x, this.size_y);
 
       });
-      // Bind save operation only after load operation has been
-      // completed to avoid race conditions.
-      $('.grid-stack').on('change', function(event, ui) {
-        save();
-      });
     }
   );
 
@@ -343,6 +339,10 @@ $(function(){ //DOM Ready
     }
   );
 
+  $('.grid-stack').on('change', throttle(function(event, ui) {
+    save();
+  }, 250));
+
 });
 
 function setTime() {

http://git-wip-us.apache.org/repos/asf/chukwa/blob/fcfcc088/src/main/web/hicc/home/js/throttle.js
----------------------------------------------------------------------
diff --git a/src/main/web/hicc/home/js/throttle.js b/src/main/web/hicc/home/js/throttle.js
new file mode 100644
index 0000000..f8f7ff0
--- /dev/null
+++ b/src/main/web/hicc/home/js/throttle.js
@@ -0,0 +1,22 @@
+function throttle(fn, threshhold, scope) {
+  threshhold || (threshhold = 250);
+  var last,
+      deferTimer;
+  return function () {
+    var context = scope || this;
+
+    var now = +new Date,
+        args = arguments;
+    if (last && now < last + threshhold) {
+      // hold on to it
+      clearTimeout(deferTimer);
+      deferTimer = setTimeout(function () {
+        last = now;
+        fn.apply(context, args);
+      }, threshhold);
+    } else {
+      last = now;
+      fn.apply(context, args);
+    }
+  };
+}
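
With the index.html change above, bursts of gridstack 'change' events collapse into at most one save() per 250 ms, with a trailing save guaranteed by the deferred timer. Below is a standalone sketch of the same pattern; the resize handler is hypothetical:

---
// At most one invocation per second, plus one trailing call
// after a burst of events stops.
var onResize = throttle(function () {
  console.log('layout refreshed');
}, 1000);
window.addEventListener('resize', onResize);
---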