Posted to user@chukwa.apache.org by 良人 <zh...@163.com> on 2010/11/03 00:49:23 UTC

Data process for HICC

 
 Hi: I have been trying to use Chukwa to analyze Hadoop efficiency, but I ran into several problems.

    First, I set up Chukwa strictly following the instructions. My HICC works normally and can display graphs when there is data in MySQL, for instance: DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by Percentage.
    But some records never made it into MySQL, so they cannot be displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer, Node Activity Graph.
  My configuration:
  chukwa-hadoop-0.4.0-client.jar is in Hadoop's lib directory
  both hadoop-metrics.properties and Hadoop's log4j.properties are in Hadoop's conf directory; I have listed these files in the attachment.
  "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects" - this quotation comes from the Chukwa release notes. I suspect the sysstat version on my Ubuntu system does not match what Chukwa expects; if so, what can I do about it?
  Could anybody give me some suggestions? Thank you very much.
  By the way, does anybody know how to start hourlyRolling and dailyRolling in version 0.4.0? I also see "Error initializing ChukwaClient with list of currently registered adaptors, clearing our local list of adaptors" in the logs; how can I resolve it?







RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
Another question: why does chukwa.log not exist in my Chukwa installation? If anybody
knows the cause, could you tell me? Thank you.

 

 

 

From: chukwa-user-return-591-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-591-zhu121972=163.com@incubator.apache.org] On
Behalf Of ZJL
Sent: November 19, 2010 9:32
To: chukwa-user@incubator.apache.org
Subject: RE: Data process for HICC

 

The copies of hadoop-metrics.properties, chukwa-hadoop-client.jar, and json.jar
are in the Hadoop directory, so that is not the root cause.

 

From: chukwa-user-return-590-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-590-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
Sent: November 19, 2010 1:06
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

 

Looks like your Chukwa agent node ran out of disk space.  Also make sure you have
Chukwa's copies of hadoop-metrics.properties, chukwa-hadoop-client.jar, and
json.jar copied from Chukwa to Hadoop, as the administration guide describes.

Regards,
Eric


On 11/18/10 5:15 AM, "ZJL" <zh...@163.com> wrote:

Hi Eric:
               You are right, the agent works abnormally; I didn't find
chukwa-hdfs-jvm-*.log, chukwa-hdfs-dfs-*.log, or chukwa-hdfs-rpc-*.log in my
system.
I just found some warnings, and I don't know what they mean; could you tell
me if you know?
The following warnings are from the log:
WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")
log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")
log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")

WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
Nov 10, 2010 2:29:33 AM org.apache.commons.httpclient.HttpMethodBase getResponseBody

2010-11-10 13:20:13,778 WARN HTTP post thread ChukwaAgent - got commit up to 73283 for adaptor escaped newline CFTA-UTF8 that doesn't appear to be running: 21 total

WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
log4j:ERROR Failed to flush writer,
java.io.IOException: No space left on device
         at java.io.FileOutputStream.writeBytes(Native Method)
         at java.io.FileOutputStream.write(FileOutputStream.java:260)
         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:57)
         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:315)
         at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:234)
         at org.apache.log4j.WriterAppender.append(WriterAppender.java:159)
         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)
         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65)
         at org.apache.log4j.Category.callAppenders(Category.java:203)
         at org.apache.log4j.Category.forcedLog(Category.java:388)
         at org.apache.log4j.Category.info(Category.java:663)
         at org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:67)
         at java.util.TimerThread.mainLoop(Timer.java:512)
         at java.util.TimerThread.run(Timer.java:462)
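The "No space left on device" trace above points at a full filesystem on the agent host. A minimal sketch for confirming that; the log path below is an assumption, substitute your own CHUKWA_HOME:

```python
import os
import shutil

def check_disk(path="/chukwa/current/var/log", threshold=0.95):
    """Return (used_fraction, ok) for the filesystem holding `path`.

    The default path is an assumed Chukwa log location; fall back
    to "/" when it does not exist on this machine.
    """
    if not os.path.exists(path):
        path = "/"
    usage = shutil.disk_usage(path)
    used_fraction = (usage.total - usage.free) / usage.total
    return used_fraction, used_fraction < threshold

used, ok = check_disk()
print(f"{used:.0%} used; {'OK' if ok else 'LOW DISK: free some space'}")
```

If the filesystem is close to full, rotating or deleting old files under the Chukwa and Hadoop log directories should make the flush errors disappear.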
 

From: chukwa-user-return-587-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-587-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
Sent: November 10, 2010 1:27
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Chukwa agent is not running on your system.  Check agent log file to see why
agent is not running.

Regards,
Eric

On 11/8/10 7:30 PM, "ZJL" <zh...@163.com> wrote:
Hi Eric:
    Telnet doesn't work; I have tried many times. In my system I use
SSH to access the remote computers. How can I use the method you mentioned to
check the adaptor list? Thank you.
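Since SSH access is available, the adaptor-list check can also be run non-interactively on the agent host itself rather than through an interactive telnet session. A sketch, assuming the agent's control port is the default 9093; the reply format may differ between Chukwa versions:

```python
import socket

def list_adaptors(host="localhost", port=9093, timeout=5.0):
    """Send the Chukwa agent's "list" command and return the reply lines."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"list\n")
        sock.settimeout(timeout)
        chunks = []
        try:
            # Read until the agent closes the connection or goes quiet.
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
    return b"".join(chunks).decode("utf-8", "replace").splitlines()
```

Copied to the agent machine and run there over SSH, an empty list (or a refused connection) would confirm that the appenders never registered their adaptors with the agent.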
 

From: chukwa-user-return-585-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-585-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
Sent: November 9, 2010 2:28
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Try:

telnet localhost 9093
list

See how many adaptors you have on your machine.  For some reason, the
ChukwaDailyRollingAppender or Log4JMetricsContext is unable to talk to the
agent to register the log files.

If it is working properly, you should see adaptors listed, similar to this:

adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0
adaptor_e41369787a2b508486d0149f7b971223)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped dfs 0 /chukwa/current/var/log/metrics/chukwa-hdfs-dfs-1283736356808.log 325440
adaptor_098cf71f98cfe22f630f6fcd6e4bedfb)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped rpc 0 /chukwa/current/var/log/metrics/chukwa-hdfs-rpc-1283649648098.log 2838600

Hope this helps.

Regards,
Eric

On 11/8/10 5:48 AM, "良人" <zh...@163.com> wrote:
Hi Eric:
1. I have copied hadoop-metrics.properties.template to
hadoop/conf/hadoop-metrics.properties and copied chukwa-hadoop-0.4.0-client.jar
and json.jar to hadoop/lib as well, but the DFS metrics still cannot
be scraped.
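For reference, the wiring that step 1 establishes looks roughly like the fragment below in hadoop/conf/hadoop-metrics.properties. This is a sketch from memory, not an authoritative excerpt; compare it against the hadoop-metrics.properties.template shipped with Chukwa 0.4 for the exact class names and periods:

```properties
# Route each metrics context through Chukwa's log4j metrics context,
# which writes the chukwa-hdfs-*.log files the agent tails.
dfs.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
dfs.period=60
jvm.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
jvm.period=60
rpc.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
rpc.period=60
```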
2. I not only shut down the Chukwa agent and HDFS, removed all checkpoint
files from chukwa/var, and restarted the Chukwa agent and then Hadoop, but
also formatted the NameNode. It still doesn't work; the error has not
disappeared.
The error appears in several different files, which I have uploaded as an
attachment. Could you help me check them? Thank you.


At 2010-11-08 02:02:20,"Eric Yang" <er...@gmail.com> wrote:

>Did you copy hadoop-metrics.properties.template to
>hadoop/conf/hadoop-metrics.properties?  You also need to copy
>chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
>work.
>
>It looks like your check point file is out of sync with the hash map
>which kept track of the files in chukwa-hadoop client.  You might need
>to shut down chukwa agent and hdfs.  Remove all check point files from
>chukwa/var, and restart chukwa agent then restart hadoop.
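The checkpoint cleanup in the steps above can be scripted. A sketch, under the assumption that agent checkpoint files live directly under chukwa/var and carry "checkpoint" in their names; verify the actual file names on your installation before deleting anything:

```python
import glob
import os

def clear_checkpoints(var_dir="/chukwa/var", pattern="*checkpoint*"):
    """Remove agent checkpoint files so the agent restarts from a clean slate.

    Run this only while the Chukwa agent (and HDFS, per the advice
    above) is stopped. Returns the list of removed paths.
    """
    removed = []
    for path in sorted(glob.glob(os.path.join(var_dir, pattern))):
        if os.path.isfile(path):
            os.remove(path)
            removed.append(path)
    return removed
```

After it runs, restart the Chukwa agent and then Hadoop, in that order, as the steps above describe.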
>
>regards,
>Eric
>
>On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zh...@163.com> wrote:
>> Hi Eric:
>>    Thank you for your instructions. I also hope the new release of Chukwa
will come soon, but I still have some questions about my Chukwa deployment.
>>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example:
DFS FS Name System Metrics, DFS Name Node Metrics, etc.
>>  2. I see "Error initializing ChukwaClient with list of currently registered
adaptors, clearing our local list of adaptors" in the log; do you know what
deployment issue causes this problem?
>>
>> -----Original Message-----
>> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
>> Sent: November 6, 2010 7:19
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> 1. For system metrics, it is likely that the output of sar and iostat
>> does not match what Chukwa expects.  I found system utilities' output to
>> be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
>> moved to Sigar for collecting system metrics.  This should improve the
>> problem that you were seeing.  Your original question is about node
>> activity and HDFS heatmap.  Those metrics are not populated
>> automatically.  For node activity, Chukwa was based on Torque's
>> pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
>> need to have hdfs client trace and mr client trace log files stream
>> through Chukwa in order to generate graphs for those metrics.  There is
>> no aggregation script to downsample the data for the hdfs heatmap,
>> so only the last 6 hours are visible if client trace log files
>> are processed by Chukwa.  There is a lot of work to change aggregation
>> from SQL to Pig+HBase.  However, most of that work is waiting for Pig
>> 0.8 to be released before Chukwa can start the implementation.
>> Therefore, you might need to wait a while for these features to
>> appear.
>>
>> 2.  hourlyRolling and dailyRolling should run automatically after
>> starting with start-all.sh script.
>>
>> regards,
>> Eric
>>
>> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
>>> Hi Eric:
>>> 1. I have started dbAdmin in the background and dbAdmin.sh is running;
otherwise the database would contain nothing. In my database, some of the
field records have no data, but not all. "System metrics collection may fail or
be incomplete if your versions of sar and iostat do not match the ones that
Chukwa expects" - this quotation comes
>>> from the Chukwa release notes. I suspect the sysstat version on my Ubuntu
system does not match what Chukwa expects; if so, what can I do about it?
>>> 2. I don't know whether hourlyRolling or dailyRolling runs automatically
after starting bin/start-all.sh.
>>>
>>> -----Original Message-----
>>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
>>> Sent: November 5, 2010 8:39
>>> To: chukwa-user@incubator.apache.org
>>> Subject: Re: Data process for HICC
>>>
>>> Hi,
>>>
>>> This may be caused by dbAdmin.sh not running in the background.
>>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>>> partitions from the template tables.  If the script is not running,
>>> the data might not get loaded.
>>>
>>> I am not sure about your question about hourlyRolling or dailyRolling.
>>>  Those processes should be handled by the data processor (./bin/chukwa
>>> dp).
>>>
>>> regards,
>>> Eric
>>>
>>> 2010/11/2 良人 <zh...@163.com>:






RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
The copies of hadoop-metrics.properties, chukwa-hadoop-client.jar, and json.jar
are in the Hadoop directory, so that is not the root cause.

 


Re: Data process for HICC

Posted by Eric Yang <ey...@yahoo-inc.com>.
Looks like your Chukwa agent node ran out of disk space.  Also make sure you have Chukwa's copies of hadoop-metrics.properties, chukwa-hadoop-client.jar, and json.jar copied from Chukwa to Hadoop, as the administration guide describes.

Regards,
Eric


On 11/18/10 5:15 AM, "ZJL" <zh...@163.com> wrote:

HI eric:
               You are right, the agent work abnormally, i didn’t find chukwa-hdfs-jvm-*.log, chukwa-hdfs-dfs-*.log,chukwa-hdfs-rpc-*.log in my system.
I just find some warning,I didn’t know what  meaning about that,could tell me if you know that.
The following is warning in log:
WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")
log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")
log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")

WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
Nov 10, 2010 2:29:33 AM org.apache.commons.httpclient.HttpMethodBase getResponseBody

2010-11-10 13:20:13,778 WARN HTTP post thread ChukwaAgent - got commit up to 73283  for adaptor escaped newline CFTA-UTF8 that doesn't appear to be running: 21 total

WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
log4j:ERROR Failed to flush writer,
java.io.IOException: No space left on device
         at java.io.FileOutputStream.writeBytes(Native Method)
         at java.io.FileOutputStream.write(FileOutputStream.java:260)
         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:57)
         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:315)
         at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:234)
         at org.apache.log4j.WriterAppender.append(WriterAppender.java:159)
         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)
         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65)
         at org.apache.log4j.Category.callAppenders(Category.java:203)
         at org.apache.log4j.Category.forcedLog(Category.java:388)
         at org.apache.log4j.Category.info(Category.java:663)
         at org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:67)
         at java.util.TimerThread.mainLoop(Timer.java:512)
         at java.util.TimerThread.run(Timer.java:462)


From: chukwa-user-return-587-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-587-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
Sent: 2010年11月10日 1:27
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Chukwa agent is not running on your system.  Check agent log file to see why agent is not running.

Regards,
Eric

On 11/8/10 7:30 PM, "ZJL" <zh...@163.com> wrote:
Hi ecri:
    Telnent doesn’t work, I have tried so many times,in my system, I use the ssh to access remote computer, how can I do by your mentioning method to check the adaptor list.thank you


From: chukwa-user-return-585-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-585-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
Sent: 2010年11月9日 2:28
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Try:

telnet localhost 9093
list

See how many adaptor do you have on your machine?  For some reason, the ChuwaDailyRollingAppender or Log4JMetricsContext is unable to talk to the agent to register the log files.

If it is working properly, you should see adaptor listed similar to this:

adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0
adaptor_e41369787a2b508486d0149f7b971223)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped dfs 0 /chukwa/current/var/log/metrics/chukwa-hdfs-dfs-1283736356808.log 325440
adaptor_098cf71f98cfe22f630f6fcd6e4bedfb)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped rpc 0 /chukwa/current/var/log/metrics/chukwa-hdfs-rpc-1283649648098.log 2838600

Hope this helps.

Regards,
Eric

On 11/8/10 5:48 AM, "良人" <zh...@163.com> wrote:
Hi Eric:
1. I have copied hadoop-metrics.properties.template to hadoop/conf/hadoop-metrics.properties and copied chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib as well, but the DFS metrics still cannot be scraped.
2. I not only shut down the Chukwa agent and HDFS, removed all checkpoint files from chukwa/var, and restarted the Chukwa agent and then Hadoop, but also formatted the namenode. It still doesn't work; the error has not disappeared. The error appears in different files, which I have uploaded as attachments. Could you help me check them? Thank you.


At 2010-11-08 02:02:20,"Eric Yang" <er...@gmail.com> wrote:

>Did you copy hadoop-metrics.properties.template to
>hadoop/conf/hadoop-metrics.properties?  You also need to copy
>chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
>work.
>
>It looks like your check point file is out of sync with the hash map
>which kept track of the files in chukwa-hadoop client.  You might need
>to shut down chukwa agent and hdfs.  Remove all check point files from
>chukwa/var, and restart chukwa agent then restart hadoop.
>
>regards,
>Eric
>
>On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zh...@163.com> wrote:
>> Hi Eric:
>>    Thank you for your instructions. I also hope the new Chukwa release will come soon, but I still have some questions about my Chukwa deployment.
>>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example: DFS FS Name System Metrics, DFS Name Node Metrics, etc.
>>  2. "Error initializing ChukwaClient with list of currently registered adaptors, clearing our local list of adaptors" appears in the log; do you know what deployment problem causes this?
>>
>> -----Original Message-----
>> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
>> Sent: 2010年11月6日 7:19
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> 1. For system metrics, it is likely that the output of sar and iostat does
>> not match what Chukwa expects.  I have found system utility output to
>> be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
>> moved to Sigar for collecting system metrics.  This should improve the
>> problem that you were seeing.  Your original question is about node
>> activity, and HDFS heatmap.  Those metrics are not populated
>> automatically.  For node activity, Chukwa was based on Torque's
>> pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
>> need to have hdfs client trace and mr client trace log files stream
>> through Chukwa in order to generate graph for those metrics.  There is
>> no aggregation script to down sample the data for hdfs heatmap,
>> therefore only the last 6 hours is visible, if client trace log files
>> are processed by Chukwa.  There is a lot of work to change aggregation
>> from SQL to Pig+HBase.  However, most of the work is waiting for Pig
>> 0.8 to be released in order for Chukwa to start the implementation.
>> Therefore, you might need to wait for a while for the features to
>> appear.
>>
>> 2.  hourlyRolling and dailyRolling should run automatically after
>> starting with start-all.sh script.
>>
>> regards,
>> Eric
>>
>> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
>>> Hi Eric:
>>> 1. I have started dbAdmin in the background and dbAdmin.sh is running; otherwise the database would contain nothing. In my database, some field records have no data, but not all. "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects" is a citation from the Chukwa release notes. I suspect the sysstat version on my Ubuntu does not match what Chukwa expects; if so, what can I do about it?
>>> 2. I don't know whether hourlyRolling or dailyRolling runs automatically after starting bin/start-all.sh.
>>>
>>> -----Original Message-----
>>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
>>> Sent: 2010年11月5日 8:39
>>> To: chukwa-user@incubator.apache.org
>>> Subject: Re: Data process for HICC
>>>
>>> Hi,
>>>
>>> This may be caused by dbAdmin.sh not running in the background.
>>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>>> partitions from the template tables.  If the script is not running,
>>> the data might not get loaded.
>>>
>>> I am not sure about your question about hourlyRolling or dailyRolling.
>>>  Those processes should be handled by data processor (./bin/chukwa
>>> dp).
>>>
>>> regards,
>>> Eric
>>>
>>> 2010/11/2 良人 <zh...@163.com>:
>>>>
>>>>  Hi:    I have always wanted to use Chukwa to analyze the efficiency of
>>>> Hadoop, but I ran into several problems.
>>>>     Firstly, I set up Chukwa strictly following the instructions. My HICC
>>>> works normally and can display graphs when there is data in MySQL, for
>>>> instance: DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics
>>>> by Percentage.
>>>>     But some field records are missing from MySQL and cannot be displayed
>>>> in HICC, for example: DFS Name Node Metrics, DFS FS Name System Metrics,
>>>> Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer, Node
>>>> Activity Graph.
>>>>   My configuration:
>>>>   chukwa-hadoop-0.4.0-client.jar has been placed in Hadoop's lib directory.
>>>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>>>> Hadoop's conf directory; I have listed these documents in the attachment.
>>>>   "System metrics collection may fail or be incomplete if your versions of
>>>> sar and iostat do not match the ones that Chukwa expects" is a citation
>>>> from the Chukwa release notes. I suspect the sysstat version on my Ubuntu
>>>> does not match what Chukwa expects; if so, what can I do about it?
>>>>   Could anybody give me some suggestions? Thank you very much.
>>>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>>>> in version 0.4.0, and how to resolve "Error initializing ChukwaClient with
>>>> list of currently registered adaptors, clearing our local list of adaptors"
>>>> in the logs?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>> __________ Information from ESET NOD32 Antivirus, version of virus signature database 5592 (20101104) __________
>>>
>>> The message was checked by ESET NOD32 Antivirus.
>>>
>>> http://www.eset.com
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>









RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
Hi Eric:

               You are right; the agent works abnormally. I didn't find
chukwa-hdfs-jvm-*.log, chukwa-hdfs-dfs-*.log, or chukwa-hdfs-rpc-*.log in my
system.

I just found some warnings, and I don't know what they mean; could you tell
me if you know?

The following are the warnings in the log:

WARNING: Going to buffer response body of large or unknown size. Using
getResponseBodyAsStream instead is recommended.

log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")

log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")

log4j:ERROR cleanUpRegex == null || !cleanUpRegex.contains("$fileName")

 

WARNING: Going to buffer response body of large or unknown size. Using
getResponseBodyAsStream instead is recommended.

Nov 10, 2010 2:29:33 AM org.apache.commons.httpclient.HttpMethodBase
getResponseBody

 

2010-11-10 13:20:13,778 WARN HTTP post thread ChukwaAgent - got commit up to
73283  for adaptor escaped newline CFTA-UTF8 that doesn't appear to be
running: 21 total

 

WARNING: Going to buffer response body of large or unknown size. Using
getResponseBodyAsStream instead is recommended.

log4j:ERROR Failed to flush writer,

java.io.IOException: No space left on device

         at java.io.FileOutputStream.writeBytes(Native Method)

         at java.io.FileOutputStream.write(FileOutputStream.java:260)

         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)

         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)

         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)

         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)

         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)

         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:57)

         at
org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:315)

         at
org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:234)

         at org.apache.log4j.WriterAppender.append(WriterAppender.java:159)

         at
org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)

         at
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(Append
erAttachableImpl.java:65)

         at org.apache.log4j.Category.callAppenders(Category.java:203)

         at org.apache.log4j.Category.forcedLog(Category.java:388)

         at org.apache.log4j.Category.info(Category.java:663)

         at
org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(
ExecAdaptor.java:67)

         at java.util.TimerThread.mainLoop(Timer.java:512)

         at java.util.TimerThread.run(Timer.java:462)
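The IOException above is the real failure: log4j cannot flush because the partition holding the Chukwa logs is full. A minimal diagnosis sketch (LOG_DIR is an assumed variable, not a Chukwa setting; substitute the actual log path on your host):

```shell
# "No space left on device" from the log4j flush means the partition with
# the Chukwa logs is full. First see which filesystem has no free space:
df -h
# Then find the biggest consumers under the log directory:
LOG_DIR="${CHUKWA_LOG_DIR:-/tmp}"
du -sh "$LOG_DIR"/* 2>/dev/null | sort -rh | head
```

Once space is freed (or the logs moved to a larger partition), restart the agent so the appenders reopen their files.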

 

From: chukwa-user-return-587-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-587-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
Sent: 2010年11月10日 1:27
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

 

Chukwa agent is not running on your system.  Check agent log file to see why
agent is not running.

Regards,
Eric

On 11/8/10 7:30 PM, "ZJL" <zh...@163.com> wrote:

Hi Eric:
    Telnet doesn't work; I have tried it many times. In my system I use ssh
to access the remote machines. How can I check the adaptor list using the
method you mentioned? Thank you.
 

From: chukwa-user-return-585-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-585-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
Sent: 2010年11月9日 2:28
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Try:

telnet localhost 9093
list

See how many adaptors you have on your machine. For some reason, the
ChukwaDailyRollingAppender or Log4JMetricsContext is unable to talk to the
agent to register the log files.

If it is working properly, you should see adaptor listed similar to this:

adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0
adaptor_e41369787a2b508486d0149f7b971223)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped dfs 0 /chukwa/current/var/log/metrics/chukwa-hdfs-dfs-1283736356808.log 325440
adaptor_098cf71f98cfe22f630f6fcd6e4bedfb)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped rpc 0 /chukwa/current/var/log/metrics/chukwa-hdfs-rpc-1283649648098.log 2838600

Hope this helps.

Regards,
Eric

On 11/8/10 5:48 AM, "良人" <zh...@163.com> wrote:
Hi Eric:
1. I have copied hadoop-metrics.properties.template to
hadoop/conf/hadoop-metrics.properties and copied chukwa-hadoop-0.4.0-client.jar
and json.jar to hadoop/lib as well, but the DFS metrics still cannot be
scraped.
2. I not only shut down the Chukwa agent and HDFS, removed all checkpoint
files from chukwa/var, and restarted the Chukwa agent and then Hadoop, but
also formatted the namenode. It still doesn't work; the error has not
disappeared. The error appears in different files, which I have uploaded as
attachments. Could you help me check them? Thank you.


At 2010-11-08 02:02:20,"Eric Yang" <er...@gmail.com> wrote:

>Did you copy hadoop-metrics.properties.template to
>hadoop/conf/hadoop-metrics.properties?  You also need to copy
>chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
>work.
>
>It looks like your check point file is out of sync with the hash map
>which kept track of the files in chukwa-hadoop client.  You might need
>to shut down chukwa agent and hdfs.  Remove all check point files from
>chukwa/var, and restart chukwa agent then restart hadoop.
>
>regards,
>Eric
>
>On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zh...@163.com> wrote:
>> Hi Eric:
>>    Thank you for your instructions. I also hope the new Chukwa release
>> will come soon, but I still have some questions about my Chukwa deployment.
>>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example:
>> DFS FS Name System Metrics, DFS Name Node Metrics, etc.
>>  2. "Error initializing ChukwaClient with list of currently registered
>> adaptors, clearing our local list of adaptors" appears in the log; do you
>> know what deployment problem causes this?
>>
>> -----Original Message-----
>> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
>> Sent: 2010年11月6日 7:19
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> 1. For system metrics, it is likely that the output of sar and iostat does
>> not match what Chukwa expects.  I have found system utility output to
>> be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
>> moved to Sigar for collecting system metrics.  This should improve the
>> problem that you were seeing.  Your original question is about node
>> activity, and HDFS heatmap.  Those metrics are not populated
>> automatically.  For node activity, Chukwa was based on Torque's
>> pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
>> need to have hdfs client trace and mr client trace log files stream
>> through Chukwa in order to generate graph for those metrics.  There is
>> no aggregation script to down sample the data for hdfs heatmap,
>> therefore only the last 6 hours is visible, if client trace log files
>> are processed by Chukwa.  There is a lot of work to change aggregation
>> from SQL to Pig+HBase.  However, most of the work is waiting for Pig
>> 0.8 to be released in order for Chukwa to start the implementation.
>> Therefore, you might need to wait for a while for the features to
>> appear.
>>
>> 2.  hourlyRolling and dailyRolling should run automatically after
>> starting with start-all.sh script.
>>
>> regards,
>> Eric
>>
>> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
>>> Hi Eric:
>>> 1. I have started dbAdmin in the background and dbAdmin.sh is running;
>>> otherwise the database would contain nothing. In my database, some field
>>> records have no data, but not all. "System metrics collection may fail or
>>> be incomplete if your versions of sar and iostat do not match the ones
>>> that Chukwa expects" is a citation from the Chukwa release notes. I
>>> suspect the sysstat version on my Ubuntu does not match what Chukwa
>>> expects; if so, what can I do about it?
>>> 2. I don't know whether hourlyRolling or dailyRolling runs automatically
>>> after starting bin/start-all.sh.
>>>
>>> -----Original Message-----
>>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
>>> Sent: 2010年11月5日 8:39
>>> To: chukwa-user@incubator.apache.org
>>> Subject: Re: Data process for HICC
>>>
>>> Hi,
>>>
>>> This may be caused by dbAdmin.sh not running in the background.
>>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>>> partitions from the template tables.  If the script is not running,
>>> the data might not get loaded.
>>>
>>> I am not sure about your question about hourlyRolling or dailyRolling.
>>>  Those processes should be handled by data processor (./bin/chukwa
>>> dp).
>>>
>>> regards,
>>> Eric
>>>
>>> 2010/11/2 良人 <zh...@163.com>:
>>>>
>>>>  Hi:    I have always wanted to use Chukwa to analyze the efficiency of
>>>> Hadoop, but I ran into several problems.
>>>>     Firstly, I set up Chukwa strictly following the instructions. My HICC
>>>> works normally and can display graphs when there is data in MySQL, for
>>>> instance: DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics
>>>> by Percentage.
>>>>     But some field records are missing from MySQL and cannot be displayed
>>>> in HICC, for example: DFS Name Node Metrics, DFS FS Name System Metrics,
>>>> Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer, Node
>>>> Activity Graph.
>>>>   My configuration:
>>>>   chukwa-hadoop-0.4.0-client.jar has been placed in Hadoop's lib directory.
>>>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>>>> Hadoop's conf directory; I have listed these documents in the attachment.
>>>>   "System metrics collection may fail or be incomplete if your versions of
>>>> sar and iostat do not match the ones that Chukwa expects" is a citation
>>>> from the Chukwa release notes. I suspect the sysstat version on my Ubuntu
>>>> does not match what Chukwa expects; if so, what can I do about it?
>>>>   Could anybody give me some suggestions? Thank you very much.
>>>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>>>> in version 0.4.0, and how to resolve "Error initializing ChukwaClient with
>>>> list of currently registered adaptors, clearing our local list of adaptors"
>>>> in the logs?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>










Re: Data process for HICC

Posted by Eric Yang <ey...@yahoo-inc.com>.
Chukwa agent is not running on your system.  Check agent log file to see why agent is not running.

Regards,
Eric
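Eric's suggestion can be followed with a quick scan of the agent log. This is a sketch only: the $CHUKWA_HOME default and the agent.log filename are assumptions, so adjust the paths for your install.

```shell
# Scan the Chukwa agent log for the most recent errors. The log location
# below is an assumption; point AGENT_LOG at wherever your agent writes.
CHUKWA_HOME="${CHUKWA_HOME:-/opt/chukwa}"
AGENT_LOG="$CHUKWA_HOME/logs/agent.log"
if [ -f "$AGENT_LOG" ]; then
  grep -iE 'error|exception|fatal' "$AGENT_LOG" | tail -n 20
else
  echo "agent log not found at $AGENT_LOG" >&2
fi
```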

On 11/8/10 7:30 PM, "ZJL" <zh...@163.com> wrote:

Hi Eric:
    Telnet doesn't work; I have tried it many times. In my system I use ssh to access the remote machines. How can I check the adaptor list using the method you mentioned? Thank you.


From: chukwa-user-return-585-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-585-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
Sent: 2010年11月9日 2:28
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Try:

telnet localhost 9093
list

See how many adaptors you have on your machine. For some reason, the ChukwaDailyRollingAppender or Log4JMetricsContext is unable to talk to the agent to register the log files.

If it is working properly, you should see adaptor listed similar to this:

adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0
adaptor_e41369787a2b508486d0149f7b971223)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped dfs 0 /chukwa/current/var/log/metrics/chukwa-hdfs-dfs-1283736356808.log 325440
adaptor_098cf71f98cfe22f630f6fcd6e4bedfb)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped rpc 0 /chukwa/current/var/log/metrics/chukwa-hdfs-rpc-1283649648098.log 2838600

Hope this helps.

Regards,
Eric

On 11/8/10 5:48 AM, "良人" <zh...@163.com> wrote:
Hi Eric:
1. I have copied hadoop-metrics.properties.template to hadoop/conf/hadoop-metrics.properties and copied chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib as well, but the DFS metrics still cannot be scraped.
2. I not only shut down the Chukwa agent and HDFS, removed all checkpoint files from chukwa/var, and restarted the Chukwa agent and then Hadoop, but also formatted the namenode. It still doesn't work; the error has not disappeared. The error appears in different files, which I have uploaded as attachments. Could you help me check them? Thank you.


At 2010-11-08 02:02:20,"Eric Yang" <er...@gmail.com> wrote:

>Did you copy hadoop-metrics.properties.template to
>hadoop/conf/hadoop-metrics.properties?  You also need to copy
>chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
>work.
>
>It looks like your check point file is out of sync with the hash map
>which kept track of the files in chukwa-hadoop client.  You might need
>to shut down chukwa agent and hdfs.  Remove all check point files from
>chukwa/var, and restart chukwa agent then restart hadoop.
>
>regards,
>Eric
>
>On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zh...@163.com> wrote:
>> Hi Eric:
>>    Thank you for your instructions. I also hope the new Chukwa release will come soon, but I still have some questions about my Chukwa deployment.
>>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example: DFS FS Name System Metrics, DFS Name Node Metrics, etc.
>>  2. "Error initializing ChukwaClient with list of currently registered adaptors, clearing our local list of adaptors" appears in the log; do you know what deployment problem causes this?
>>
>> -----Original Message-----
>> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
>> Sent: 2010年11月6日 7:19
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> 1. For system metrics, it is likely that the output of sar and iostat does
>> not match what Chukwa expects.  I have found system utility output to
>> be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
>> moved to Sigar for collecting system metrics.  This should improve the
>> problem that you were seeing.  Your original question is about node
>> activity, and HDFS heatmap.  Those metrics are not populated
>> automatically.  For node activity, Chukwa was based on Torque's
>> pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
>> need to have hdfs client trace and mr client trace log files stream
>> through Chukwa in order to generate graph for those metrics.  There is
>> no aggregation script to down sample the data for hdfs heatmap,
>> therefore only the last 6 hours is visible, if client trace log files
>> are processed by Chukwa.  There is a lot of work to change aggregation
>> from SQL to Pig+HBase.  However, most of the work is waiting for Pig
>> 0.8 to be released in order for Chukwa to start the implementation.
>> Therefore, you might need to wait for a while for the features to
>> appear.
>>
>> 2.  hourlyRolling and dailyRolling should run automatically after
>> starting with start-all.sh script.
>>
>> regards,
>> Eric
>>
>> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
>>> Hi Eric:
>>> 1. I have started dbAdmin in the background and dbAdmin.sh is running; otherwise the database would contain nothing. In my database, some field records have no data, but not all. "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects" is a citation from the Chukwa release notes. I suspect the sysstat version on my Ubuntu does not match what Chukwa expects; if so, what can I do about it?
>>> 2. I don't know whether hourlyRolling or dailyRolling runs automatically after starting bin/start-all.sh.
>>>
>>> -----Original Message-----
>>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
>>> Sent: 2010年11月5日 8:39
>>> To: chukwa-user@incubator.apache.org
>>> Subject: Re: Data process for HICC
>>>
>>> Hi,
>>>
>>> This may be caused by dbAdmin.sh not running in the background.
>>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>>> partitions from the template tables.  If the script is not running,
>>> the data might not get loaded.
>>>
>>> I am not sure about your question about hourlyRolling or dailyRolling.
>>>  Those processes should be handled by data processor (./bin/chukwa
>>> dp).
>>>
>>> regards,
>>> Eric
>>>
>>> 2010/11/2 良人 <zh...@163.com>:
>>>>
>>>>  Hi:    I have always wanted to use Chukwa to analyze the efficiency of
>>>> Hadoop, but I ran into several problems.
>>>>     Firstly, I set up Chukwa strictly following the instructions. My HICC
>>>> works normally and can display graphs when there is data in MySQL, for
>>>> instance: DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics
>>>> by Percentage.
>>>>     But some field records are missing from MySQL and cannot be displayed
>>>> in HICC, for example: DFS Name Node Metrics, DFS FS Name System Metrics,
>>>> Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer, Node
>>>> Activity Graph.
>>>>   My configuration:
>>>>   chukwa-hadoop-0.4.0-client.jar has been placed in Hadoop's lib directory.
>>>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>>>> Hadoop's conf directory; I have listed these documents in the attachment.
>>>>   "System metrics collection may fail or be incomplete if your versions of
>>>> sar and iostat do not match the ones that Chukwa expects" is a citation
>>>> from the Chukwa release notes. I suspect the sysstat version on my Ubuntu
>>>> does not match what Chukwa expects; if so, what can I do about it?
>>>>   Could anybody give me some suggestions? Thank you very much.
>>>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>>>> in version 0.4.0, and how to resolve "Error initializing ChukwaClient with
>>>> list of currently registered adaptors, clearing our local list of adaptors"
>>>> in the logs?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>







RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
Hi Eric:

    Telnet doesn't work; I have tried it many times. In my system I use ssh
to access the remote machines. How can I check the adaptor list using the
method you mentioned? Thank you.
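Since the telnet client is unavailable here, note that the agent's control port speaks a plain line-based TCP protocol, so bash's built-in /dev/tcp can stand in for telnet inside an ordinary ssh session. A sketch, assuming the default port 9093 from Eric's reply (host names and the timeout are placeholders):

```shell
# query_agent HOST PORT CMD -- send one command to the Chukwa agent's
# control port and print the reply. Requires bash (for /dev/tcp). Runs in
# a subshell so a failed connection does not terminate the calling shell.
query_agent() {
  local host="${1:-localhost}" port="${2:-9093}" cmd="${3:-list}"
  ( exec 3<>"/dev/tcp/${host}/${port}" &&
    printf '%s\n' "$cmd" >&3 &&
    timeout 5 cat <&3 )
}
# On the agent machine (e.g. after "ssh user@agent-host"):
# query_agent localhost 9093 list
```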

 

From: chukwa-user-return-585-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-585-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
Sent: 2010年11月9日 2:28
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

 

Try:

telnet localhost 9093
list

See how many adaptors you have on your machine. For some reason, the
ChukwaDailyRollingAppender or Log4JMetricsContext is unable to talk to the
agent to register the log files.

If it is working properly, you should see adaptor listed similar to this:

adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0
adaptor_e41369787a2b508486d0149f7b971223)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped dfs 0 /chukwa/current/var/log/metrics/chukwa-hdfs-dfs-1283736356808.log 325440
adaptor_098cf71f98cfe22f630f6fcd6e4bedfb)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped rpc 0 /chukwa/current/var/log/metrics/chukwa-hdfs-rpc-1283649648098.log 2838600

Hope this helps.

Regards,
Eric

On 11/8/10 5:48 AM, "良人" <zh...@163.com> wrote:

Hi Eric:
1. I have copied hadoop-metrics.properties.template to
hadoop/conf/hadoop-metrics.properties and copied chukwa-hadoop-0.4.0-client.jar
and json.jar to hadoop/lib as well, but the DFS metrics still cannot be
scraped.
2. I not only shut down the Chukwa agent and HDFS, removed all checkpoint
files from chukwa/var, and restarted the Chukwa agent and then Hadoop, but
also formatted the namenode. It still doesn't work; the error has not
disappeared. The error appears in different files, which I have uploaded as
attachments. Could you help me check them? Thank you.


At 2010-11-08 02:02:20,"Eric Yang" <er...@gmail.com> wrote:

>Did you copy hadoop-metrics.properties.template to
>hadoop/conf/hadoop-metrics.properties?  You also need to copy
>chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
>work.
>
>It looks like your check point file is out of sync with the hash map
>which kept track of the files in chukwa-hadoop client.  You might need
>to shut down chukwa agent and hdfs.  Remove all check point files from
>chukwa/var, and restart chukwa agent then restart hadoop.
>
>regards,
>Eric
>
>On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zh...@163.com> wrote:
>> Hi Eric:
>>    Thank you for your instructions. I also hope the new Chukwa release
>> will come soon, but I still have some questions about my Chukwa deployment.
>>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example:
>> DFS FS Name System Metrics, DFS Name Node Metrics, etc.
>>  2. "Error initializing ChukwaClient with list of currently registered
>> adaptors, clearing our local list of adaptors" appears in the log; do you
>> know what deployment problem causes this?
>>
>> -----Original Message-----
>> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
>> Sent: 2010年11月6日 7:19
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> 1. For system metrics, it is likely that the output of sar and iostat does
>> not match what Chukwa expects.  I have found system utility output to
>> be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
>> moved to Sigar for collecting system metrics.  This should improve the
>> problem that you were seeing.  Your original question is about node
>> activity, and HDFS heatmap.  Those metrics are not populated
>> automatically.  For node activity, Chukwa was based on Torque's
>> pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
>> need to have hdfs client trace and mr client trace log files stream
>> through Chukwa in order to generate graph for those metrics.  There is
>> no aggregation script to down sample the data for hdfs heatmap,
>> therefore only the last 6 hours is visible, if client trace log files
>> are processed by Chukwa.  There is a lot of work to change aggregation
>> from SQL to Pig+HBase.  However, most of the work is waiting for Pig
>> 0.8 to be released in order for Chukwa to start the implementation.
>> Therefore, you might need to wait for a while for the features to
>> appear.
>>
>> 2.  hourlyRolling and dailyRolling should run automatically after
>> starting with start-all.sh script.
>>
>> regards,
>> Eric
>>
>> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
>>> HI eric:
>>> 1. I have started dbAdmin in the background and dbAdmin.sh is
running; otherwise the database would contain nothing. In my database, some of
the field records have no data, though not all. "System metrics collection may
fail or be incomplete if your versions of sar and iostat do not match the ones
that Chukwa expects" is a quotation
>>> from the Chukwa release notes. I suspect that the sysstat version on my
Ubuntu system does not match what Chukwa expects; if so, what can I do about it?
>>> 2. I don't know whether hourlyRolling and dailyRolling run automatically
after starting bin/start-all.sh.
>>>
>>> -----Original Message-----
>>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org
[mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On
Behalf Of Eric Yang
>>> Sent: November 5, 2010 8:39
>>> To: chukwa-user@incubator.apache.org
>>> Subject: Re: Data process for HICC
>>>
>>> Hi,
>>>
>>> This may be caused by dbAdmin.sh not running in the background.
>>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>>> partitions from the template tables.  If the script is not running,
>>> the data might not get loaded.
>>>
>>> I am not sure about your question about hourlyRolling or dailyRolling.
>>>  Those processes should be handled by data processor (./bin/chukwa
>>> dp).
>>>
>>> regards,
>>> Eric
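If your build does not already keep dbAdmin.sh alive as a daemon, one way to run it periodically, as Eric describes, is a cron entry. The install path, log path, and interval below are all assumptions; adjust them to your layout and check how your release expects dbAdmin.sh to be invoked:

```
# hypothetical crontab entry: refresh table partitions every 10 minutes
*/10 * * * * /opt/chukwa/bin/dbAdmin.sh >> /opt/chukwa/logs/dbAdmin.log 2>&1
```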
>>>
>>> 2010/11/2 良人 <zh...@163.com>:
>>>>
>>>>  HI:    I have been wanting to use Chukwa to analyze the efficiency of
>>>> Hadoop, but I ran into several problems.
>>>>     First, I set up Chukwa strictly following the instructions. My HICC
>>>> works normally and can display graphs when there is some data in MySQL,
>>>> for instance: DFS Throughput Metrics, DFS Data Node Metrics, Cluster
>>>> Metrics by Percentage.
>>>>     But some field records are missing from MySQL and cannot be
>>>> displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System
>>>> Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
>>>> Node Activity Graph.
>>>>   My configuration:
>>>>   chukwa-hadoop-0.4.0-client.jar has been placed in Hadoop's lib
>>>> directory.
>>>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>>>> Hadoop's conf directory; I have attached these files.
>>>>   "System metrics collection may fail or be incomplete if your versions of
>>>> sar and iostat do not match the ones that Chukwa expects" is a quotation
>>>> from the Chukwa release notes. I suspect that the sysstat version on my
>>>> Ubuntu system does not match what Chukwa expects; if so, what can I do
>>>> about it?
>>>>   Could anybody give me some suggestions? Thank you very much.
>>>>   By the way, does anybody know how to start hourlyRolling and
>>>> dailyRolling in version 0.4.0, and how to resolve "Error initializing
>>>> ChukwaClient with list of currently registered adaptors, clearing our
>>>> local list of adaptors" in the logs?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
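For reference, the metrics routing discussed in this thread is configured in hadoop-metrics.properties. Below is a sketch of what the relevant entries look like when Hadoop metrics are sent through Chukwa's Log4JMetricsContext (the class is named later in this thread); the period values are assumptions, so verify everything against the template shipped with your 0.4.0 distribution:

```
# sketch of hadoop-metrics.properties entries for Chukwa; verify class
# name and periods against hadoop-metrics.properties.template in 0.4.0
dfs.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
dfs.period=60
mapred.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
mapred.period=60
jvm.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
jvm.period=60
rpc.class=org.apache.hadoop.chukwa.inputtools.log4j.Log4JMetricsContext
rpc.period=60
```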
>>>
>>> __________ Information from ESET NOD32 Antivirus, version of virus
signature database 5592 (20101104) __________
>>>
>>> The message was checked by ESET NOD32 Antivirus.
>>>
>>> http://www.eset.com
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>








Re: Data process for HICC

Posted by Eric Yang <ey...@yahoo-inc.com>.
Try:

telnet localhost 9093
list

See how many adaptors you have on your machine. For some reason, the ChukwaDailyRollingFileAppender or Log4JMetricsContext is unable to talk to the agent to register the log files.

If it is working properly, you should see adaptors listed similar to this:

adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0
adaptor_e41369787a2b508486d0149f7b971223)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped dfs 0 /chukwa/current/var/log/metrics/chukwa-hdfs-dfs-1283736356808.log 325440
adaptor_098cf71f98cfe22f630f6fcd6e4bedfb)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped rpc 0 /chukwa/current/var/log/metrics/chukwa-hdfs-rpc-1283649648098.log 2838600

Hope this helps.

Regards,
Eric
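Eric's check can also be scripted. Since telnet is interactive, the sample adaptor line below (copied from his output above) stands in for a live agent, and the commented command shows the real query against port 9093 from his message:

```shell
# One sample line of `list` output, copied from Eric's example above.
sample='adaptor_217ea6590b5749d07394bb3522f93a58)  org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8NewLineEscaped jvm 0 /chukwa/current/var/log/metrics/chukwa-hdfs-jvm-1285170111337.log 0'

# Against a live agent you would pipe the commands instead of typing them:
#   printf 'list\nclose\n' | telnet localhost 9093

# Count registered adaptors; zero means the appenders never registered.
count=$(printf '%s\n' "$sample" | grep -c '^adaptor_')
echo "adaptors registered: $count"
```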

On 11/8/10 5:48 AM, "良人" <zh...@163.com> wrote:

Hi eric:
1. I have copied hadoop-metrics.properties.template to hadoop/conf/hadoop-metrics.properties and
copied chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib as well, but the DFS metrics still cannot
be scraped.
2. I have not only shut down the chukwa agent and hdfs, removed all checkpoint files from chukwa/var,
and restarted the chukwa agent and then hadoop, but also formatted the namenode. It still doesn't work; the error has not disappeared.
The error appears in several files, which I have uploaded as attachments. Could you help me check them? Thank you.



Re:Re: Data process for HICC

Posted by 良人 <zh...@163.com>.
Hi eric:
1. I have copied hadoop-metrics.properties.template to hadoop/conf/hadoop-metrics.properties and
copied chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib as well, but the DFS metrics still cannot
be scraped.
2. I have not only shut down the chukwa agent and hdfs, removed all checkpoint files from chukwa/var,
and restarted the chukwa agent and then hadoop, but also formatted the namenode. It still doesn't work; the error has not disappeared.
The error appears in several files, which I have uploaded as attachments. Could you help me check them? Thank you.
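For anyone following along, the file placement from this exchange can be scripted. The block below rehearses it against throwaway directories so it is safe to run anywhere; point HADOOP_HOME and CHUKWA_HOME at your real installations instead (both variable names and the location of the client jar inside the Chukwa tree are assumptions about your layout):

```shell
# Stand-in directories; replace with your real install roots.
HADOOP_HOME=$(mktemp -d)
CHUKWA_HOME=$(mktemp -d)
mkdir -p "$HADOOP_HOME/conf" "$HADOOP_HOME/lib" "$CHUKWA_HOME/conf" "$CHUKWA_HOME/lib"
# Fake the Chukwa artifacts so the copies below succeed in this rehearsal.
touch "$CHUKWA_HOME/conf/hadoop-metrics.properties.template" \
      "$CHUKWA_HOME/chukwa-hadoop-0.4.0-client.jar" \
      "$CHUKWA_HOME/lib/json.jar"

# The three moves discussed in the thread:
cp "$CHUKWA_HOME/conf/hadoop-metrics.properties.template" \
   "$HADOOP_HOME/conf/hadoop-metrics.properties"
cp "$CHUKWA_HOME/chukwa-hadoop-0.4.0-client.jar" "$HADOOP_HOME/lib/"
cp "$CHUKWA_HOME/lib/json.jar" "$HADOOP_HOME/lib/"

ls "$HADOOP_HOME/lib"
```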




At 2010-11-08 02:02:20,"Eric Yang" <er...@gmail.com> wrote:

>Did you copy hadoop-metrics.properties.template to
>hadoop/conf/hadoop-metrics.properties?  You also need to copy
>chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
>work.
>
>It looks like your checkpoint file is out of sync with the hash map
>that keeps track of the files in the chukwa-hadoop client.  You might need
>to shut down the chukwa agent and hdfs.  Remove all checkpoint files from
>chukwa/var, and restart the chukwa agent, then restart hadoop.
>
>regards,
>Eric
>

Re: Data process for HICC

Posted by Eric Yang <er...@gmail.com>.
Did you copy hadoop-metrics.properties.template to
hadoop/conf/hadoop-metrics.properties?  You also need to copy
chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
work.

It looks like your checkpoint file is out of sync with the hash map
that keeps track of the files in the chukwa-hadoop client.  You might need
to shut down the chukwa agent and hdfs.  Remove all checkpoint files from
chukwa/var, and restart the chukwa agent, then restart hadoop.

regards,
Eric
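Eric's reset procedure, sketched as commands. CHUKWA_HOME, the stop/start script names, and the checkpoint file name are all assumptions; a throwaway directory stands in for a real install so the destructive step can be rehearsed safely:

```shell
CHUKWA_HOME=$(mktemp -d)                 # stand-in for your real install root
mkdir -p "$CHUKWA_HOME/var"
touch "$CHUKWA_HOME/var/chukwa_agent_checkpoint.0"   # stale checkpoint stand-in

# 1. Stop the chukwa agent and HDFS first (shown as comments; script
#    names depend on your installation):
#      $CHUKWA_HOME/bin/stop-agents.sh
#      $HADOOP_HOME/bin/stop-dfs.sh
# 2. Remove every checkpoint file:
rm -f "$CHUKWA_HOME"/var/*checkpoint*
# 3. Restart the chukwa agent, then restart Hadoop.

ls -A "$CHUKWA_HOME/var" | wc -l         # no checkpoint files remain
```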

On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zh...@163.com> wrote:
> Hi eric:
>    Thank you for your instructions. I also hope the new release of Chukwa will come soon, but I still have some questions about my Chukwa deployment.
>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example: DFS FS Name System Metrics, DFS Name Node Metrics, etc.
>  2. "Error initializing ChukwaClient with list of currently registered adaptors, clearing our local list of adaptors" appears in the log. Do you know what part of the deployment causes this problem?
>

RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
Hi eric:
    Thank you for your instructions. I also hope the new release of Chukwa will come soon, but I still have some questions about my Chukwa deployment.
 1. In my Chukwa system, the DFS metrics cannot be scraped, for example: DFS FS Name System Metrics, DFS Name Node Metrics, etc.
 2. "Error initializing ChukwaClient with list of currently registered adaptors, clearing our local list of adaptors" appears in the log. Do you know what part of the deployment causes this problem?
  

Re: Data process for HICC

Posted by Eric Yang <er...@gmail.com>.
The Sigar library ships with Chukwa trunk; no separate installation is required.

regards,
Eric

On Sun, Nov 7, 2010 at 3:19 AM, ZJL <zh...@163.com> wrote:
> HI eric :
>   On my Ubuntu system, I just installed sysstat to collect system metrics. Do you mean that my Ubuntu system needs Sigar installed?
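To see which sysstat version Ubuntu is actually running and compare it against whatever your Chukwa release expects, something like this works; the expected version below is a placeholder, not a value taken from the release notes:

```shell
expected="7.0.2"   # placeholder: substitute the version your Chukwa release notes name

# `sar -V` prints its version banner to stderr on some sysstat builds, hence 2>&1.
installed=$(sar -V 2>&1 | grep -om1 '[0-9]\+\.[0-9][0-9.]*' || true)

if [ "$installed" = "$expected" ]; then
  echo "sysstat version matches: $installed"
else
  echo "sysstat version mismatch: installed='${installed:-none}', expected=$expected"
fi
```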
> -----Original Message-----
> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
> Sent: 2010年11月6日 7:19
> To: chukwa-user@incubator.apache.org
> Subject: Re: Data process for HICC
>
> 1. For system metrics, it is likely the output of sar and iostat do
> not match of what Chukwa expects.  I found system utilities output to
> be highly unreliable for scrapping.  Hence, in Chukwa trunk, I have
> moved to Sigar for collecting system metrics.  This should improve the
> problem that you were seeing.  Your original question is about node
> activity, and HDFS heatmap.  Those metrics are not populated
> automatically.  For node activity, Chukwa was based on Torque's
> pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
> need to have hdfs client trace and mr client trace log files stream
> through Chukwa in order to generate graph for those metrics.  There is
> no aggregation script to down sample the data for hdfs heatmap,
> therefore only the last 6 hours is visible, if client trace log files
> are processed by Chukwa.  There is a lot of work to change aggregation
> from SQL to Pig+HBase.  However, most of the work is waiting for Pig
> 0.8 to be release in order for Chukwa to start the implementation.
> Therefore, you might need to wait for a while for the features to
> appear.
>
> 2.  hourlyRolling and dailyRolling should run automatically after
> starting with start-all.sh script.
>
> regards,
> Eric
>
> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
>> HI eric:
>> 1.In background,I have started dbAdmin and the dbAdmin.sh was running,otherwise the dbbase would have nothing.in my database ,some of field record have no data. not all. "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects" this citation come
>> from chukwa releasenotes, i suspect if my sysstat version of ubuntu is not match for chukwa, if so, what can i do for that.
>> 2.i don't know if hourlyRolling or dailyRolling automatically run,after starting bin/start-all.sh
>>
>> -----Original Message-----
>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
>> Sent: 2010年11月5日 8:39
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> Hi,
>>
>> This may be caused by dbAdmin.sh was not running in the background.
>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>> partitions from the template tables.  If the script is not running,
>> the data might not get loaded.
>>
>> I am not sure about your question about hourlyRolling or dailyRolling.
>>  Those processes should be handled by data processor (./bin/chukwa
>> dp).
>>
>> regards,
>> Eric
>>
>> 2010/11/2 良人 <zh...@163.com>:
>>>
>>>  Hi:    I would like to use Chukwa to analyze Hadoop efficiency, but I
>>> ran into several problems.
>>>     First, I set up Chukwa strictly following the instructions. HICC works
>>> normally and can display graphs when there is data in MySQL, for instance:
>>> DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by Percentage.
>>>     But some metric tables in MySQL have no records, so they cannot be
>>> displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System
>>> Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
>>> Node Activity Graph.
>>>   My configuration:
>>>   chukwa-hadoop-0.4.0-client.jar is in Hadoop's lib directory.
>>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>>> Hadoop's conf directory; I have attached these files.
>>>   The Chukwa release notes say: "System metrics collection may fail or be
>>> incomplete if your versions of sar and iostat do not match the ones that
>>> Chukwa expects". I suspect the sysstat version on my Ubuntu machine does
>>> not match what Chukwa expects; if so, what can I do about it?
>>>   Could anybody give me some suggestions? Thank you very much.
>>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>>> in version 0.4.0? Also, my logs show "Error initializing ChukwaClient with
>>> list of currently registered adaptors, clearing our local list of
>>> adaptors"; how can I resolve it?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> __________ Information from ESET NOD32 Antivirus, version of virus signature database 5592 (20101104) __________
>>
>> The message was checked by ESET NOD32 Antivirus.
>>
>> http://www.eset.com
>>
>>
>>
>>
>>
>
>
>
>
>
>

RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
Hi Eric:
   On my Ubuntu machine I only installed sysstat to collect system metrics. Do you mean that I need to install Sigar as well?
-----Original Message-----
From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
Sent: 2010年11月6日 7:19
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

1. For system metrics, it is likely the output of sar and iostat do
not match of what Chukwa expects.  I found system utilities output to
be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
moved to Sigar for collecting system metrics.  This should improve the
problem that you were seeing.  Your original question is about node
activity, and HDFS heatmap.  Those metrics are not populated
automatically.  For node activity, Chukwa was based on Torque's
pbsnodes.  This is no longer a maintained path.  For HDFS heatmap, you
need to have hdfs client trace and mr client trace log files stream
through Chukwa in order to generate graph for those metrics.  There is
no aggregation script to down sample the data for hdfs heatmap,
therefore only the last 6 hours is visible, if client trace log files
are processed by Chukwa.  There is a lot of work to change aggregation
from SQL to Pig+HBase.  However, most of the work is waiting for Pig
0.8 to be released in order for Chukwa to start the implementation.
Therefore, you might need to wait for a while for the features to
appear.

2.  hourlyRolling and dailyRolling should run automatically after
starting with start-all.sh script.

regards,
Eric

On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
> Hi Eric:
> 1. I have started dbAdmin in the background, and dbAdmin.sh is running; otherwise the database would contain nothing. In my database some of the tables have no data, but not all of them. The Chukwa release notes say: "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects". I suspect the sysstat version on my Ubuntu machine does not match what Chukwa expects; if so, what can I do about it?
> 2. I don't know whether hourlyRolling and dailyRolling run automatically after starting bin/start-all.sh.
>
> -----Original Message-----
> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
> Sent: 2010年11月5日 8:39
> To: chukwa-user@incubator.apache.org
> Subject: Re: Data process for HICC
>
> Hi,
>
> This may be caused by dbAdmin.sh not running in the background.
> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
> partitions from the template tables.  If the script is not running,
> the data might not get loaded.
>
> I am not sure about your question about hourlyRolling or dailyRolling.
>  Those processes should be handled by data processor (./bin/chukwa
> dp).
>
> regards,
> Eric
>
> 2010/11/2 良人 <zh...@163.com>:
>>
>>  Hi:    I would like to use Chukwa to analyze Hadoop efficiency, but I
>> ran into several problems.
>>     First, I set up Chukwa strictly following the instructions. HICC works
>> normally and can display graphs when there is data in MySQL, for instance:
>> DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by Percentage.
>>     But some metric tables in MySQL have no records, so they cannot be
>> displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System
>> Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
>> Node Activity Graph.
>>   My configuration:
>>   chukwa-hadoop-0.4.0-client.jar is in Hadoop's lib directory.
>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>> Hadoop's conf directory; I have attached these files.
>>   The Chukwa release notes say: "System metrics collection may fail or be
>> incomplete if your versions of sar and iostat do not match the ones that
>> Chukwa expects". I suspect the sysstat version on my Ubuntu machine does
>> not match what Chukwa expects; if so, what can I do about it?
>>   Could anybody give me some suggestions? Thank you very much.
>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>> in version 0.4.0? Also, my logs show "Error initializing ChukwaClient with
>> list of currently registered adaptors, clearing our local list of
>> adaptors"; how can I resolve it?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
>






Re: Data process for HICC

Posted by Eric Yang <er...@gmail.com>.
1. For system metrics, it is likely that the output of your sar and
iostat does not match what Chukwa expects.  I found system utility
output to be highly unreliable for scraping.  Hence, in Chukwa trunk,
I have moved to Sigar for collecting system metrics.  This should
improve the problem that you were seeing.  Your original question is
about node activity and the HDFS heatmap.  Those metrics are not
populated automatically.  For node activity, Chukwa was based on
Torque's pbsnodes.  This is no longer a maintained path.  For the HDFS
heatmap, you need to have hdfs client trace and mr client trace log
files stream through Chukwa in order to generate graphs for those
metrics.  There is no aggregation script to down-sample the data for
the hdfs heatmap, therefore only the last 6 hours is visible, if
client trace log files are processed by Chukwa.  There is a lot of
work to change aggregation from SQL to Pig+HBase.  However, most of
the work is waiting for Pig 0.8 to be released in order for Chukwa to
start the implementation.  Therefore, you might need to wait for a
while for the features to appear.
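The brittleness of scraping can be illustrated with a small sketch (the header columns below are an assumption about one sysstat release, not what Chukwa itself hard-codes; substitute the header your own iostat prints):

```shell
# Sketch: why scraping iostat/sar output is version-sensitive.
# A fixed-column parser assumes a specific header; if a different sysstat
# release adds or renames a column, the scraped fields are silently wrong.
expected="%user %nice %system %iowait %steal %idle"

# In practice you would capture this from the live tool, e.g.:
#   actual=$(iostat -c | sed -n 3p)
# Here it is hard-coded so the check itself is visible:
actual="%user %nice %system %iowait %steal %idle"

if [ "$actual" = "$expected" ]; then
  echo "header matches: column mapping is valid"
else
  echo "header mismatch: metrics collection will fail or be incomplete"
fi
```

Running a comparison like this against your installed sysstat is a quick way to tell whether the release-note warning applies to your machine.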

2.  hourlyRolling and dailyRolling should run automatically after
starting with start-all.sh script.

regards,
Eric

On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zh...@163.com> wrote:
> Hi Eric:
> 1. I have started dbAdmin in the background, and dbAdmin.sh is running; otherwise the database would contain nothing. In my database some of the tables have no data, but not all of them. The Chukwa release notes say: "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects". I suspect the sysstat version on my Ubuntu machine does not match what Chukwa expects; if so, what can I do about it?
> 2. I don't know whether hourlyRolling and dailyRolling run automatically after starting bin/start-all.sh.
>
> -----Original Message-----
> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
> Sent: 2010年11月5日 8:39
> To: chukwa-user@incubator.apache.org
> Subject: Re: Data process for HICC
>
> Hi,
>
> This may be caused by dbAdmin.sh not running in the background.
> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
> partitions from the template tables.  If the script is not running,
> the data might not get loaded.
>
> I am not sure about your question about hourlyRolling or dailyRolling.
>  Those processes should be handled by data processor (./bin/chukwa
> dp).
>
> regards,
> Eric
>
> 2010/11/2 良人 <zh...@163.com>:
>>
>>  Hi:    I would like to use Chukwa to analyze Hadoop efficiency, but I
>> ran into several problems.
>>     First, I set up Chukwa strictly following the instructions. HICC works
>> normally and can display graphs when there is data in MySQL, for instance:
>> DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by Percentage.
>>     But some metric tables in MySQL have no records, so they cannot be
>> displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System
>> Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
>> Node Activity Graph.
>>   My configuration:
>>   chukwa-hadoop-0.4.0-client.jar is in Hadoop's lib directory.
>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>> Hadoop's conf directory; I have attached these files.
>>   The Chukwa release notes say: "System metrics collection may fail or be
>> incomplete if your versions of sar and iostat do not match the ones that
>> Chukwa expects". I suspect the sysstat version on my Ubuntu machine does
>> not match what Chukwa expects; if so, what can I do about it?
>>   Could anybody give me some suggestions? Thank you very much.
>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>> in version 0.4.0? Also, my logs show "Error initializing ChukwaClient with
>> list of currently registered adaptors, clearing our local list of
>> adaptors"; how can I resolve it?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
>

RE: Data process for HICC

Posted by ZJL <zh...@163.com>.
Hi Eric:
1. I have started dbAdmin in the background, and dbAdmin.sh is running; otherwise the database would contain nothing. In my database some of the tables have no data, but not all of them. The Chukwa release notes say: "System metrics collection may fail or be incomplete if your versions of sar and iostat do not match the ones that Chukwa expects". I suspect the sysstat version on my Ubuntu machine does not match what Chukwa expects; if so, what can I do about it?
2. I don't know whether hourlyRolling and dailyRolling run automatically after starting bin/start-all.sh.

-----Original Message-----
From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org] On Behalf Of Eric Yang
Sent: 2010年11月5日 8:39
To: chukwa-user@incubator.apache.org
Subject: Re: Data process for HICC

Hi,

This may be caused by dbAdmin.sh not running in the background.
In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
partitions from the template tables.  If the script is not running,
the data might not get loaded.

I am not sure about your question about hourlyRolling or dailyRolling.
 Those processes should be handled by data processor (./bin/chukwa
dp).

regards,
Eric

2010/11/2 良人 <zh...@163.com>:
>
>  Hi:    I would like to use Chukwa to analyze Hadoop efficiency, but I
> ran into several problems.
>     First, I set up Chukwa strictly following the instructions. HICC works
> normally and can display graphs when there is data in MySQL, for instance:
> DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by Percentage.
>     But some metric tables in MySQL have no records, so they cannot be
> displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System
> Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
> Node Activity Graph.
>   My configuration:
>   chukwa-hadoop-0.4.0-client.jar is in Hadoop's lib directory.
>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
> Hadoop's conf directory; I have attached these files.
>   The Chukwa release notes say: "System metrics collection may fail or be
> incomplete if your versions of sar and iostat do not match the ones that
> Chukwa expects". I suspect the sysstat version on my Ubuntu machine does
> not match what Chukwa expects; if so, what can I do about it?
>   Could anybody give me some suggestions? Thank you very much.
>   By the way, does anybody know how to start hourlyRolling and dailyRolling
> in version 0.4.0? Also, my logs show "Error initializing ChukwaClient with
> list of currently registered adaptors, clearing our local list of
> adaptors"; how can I resolve it?
>
>
>
>
>
>
>
>
>






Re: Data process for HICC

Posted by Eric Yang <er...@gmail.com>.
Hi,

This may be caused by dbAdmin.sh not running in the background.
In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
partitions from the template tables.  If the script is not running,
the data might not get loaded.

I am not sure about your question about hourlyRolling or dailyRolling.
Those processes should be handled by the data processor (./bin/chukwa
dp).
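A minimal sketch of what that looks like operationally (the CHUKWA_HOME default and log path below are assumptions about a stock Chukwa 0.4 layout, not mandated names; adjust to your install):

```shell
# Hypothetical sketch: keep dbAdmin.sh alive and run the data processor.
CHUKWA_HOME=${CHUKWA_HOME:-/opt/chukwa}

# dbAdmin.sh must run in the background so table partitions are created
# from the template tables before demuxed data is loaded into MySQL.
if [ -x "$CHUKWA_HOME/bin/dbAdmin.sh" ]; then
  nohup "$CHUKWA_HOME/bin/dbAdmin.sh" >> "$CHUKWA_HOME/logs/dbAdmin.out" 2>&1 &
fi

# hourlyRolling and dailyRolling are driven by the data processor:
if [ -x "$CHUKWA_HOME/bin/chukwa" ]; then
  "$CHUKWA_HOME/bin/chukwa" dp
fi
```

The -x guards make the sketch a no-op on machines without Chukwa installed, so it is safe to drop into an init script.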

regards,
Eric

2010/11/2 良人 <zh...@163.com>:
>
>  Hi:    I would like to use Chukwa to analyze Hadoop efficiency, but I
> ran into several problems.
>     First, I set up Chukwa strictly following the instructions. HICC works
> normally and can display graphs when there is data in MySQL, for instance:
> DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by Percentage.
>     But some metric tables in MySQL have no records, so they cannot be
> displayed in HICC, for example: DFS Name Node Metrics, DFS FS Name System
> Metrics, Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
> Node Activity Graph.
>   My configuration:
>   chukwa-hadoop-0.4.0-client.jar is in Hadoop's lib directory.
>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
> Hadoop's conf directory; I have attached these files.
>   The Chukwa release notes say: "System metrics collection may fail or be
> incomplete if your versions of sar and iostat do not match the ones that
> Chukwa expects". I suspect the sysstat version on my Ubuntu machine does
> not match what Chukwa expects; if so, what can I do about it?
>   Could anybody give me some suggestions? Thank you very much.
>   By the way, does anybody know how to start hourlyRolling and dailyRolling
> in version 0.4.0? Also, my logs show "Error initializing ChukwaClient with
> list of currently registered adaptors, clearing our local list of
> adaptors"; how can I resolve it?
>
>
>
>
>
>
>
>
>