Posted to dev@ambari.apache.org by "Michael Harp (JIRA)" <ji...@apache.org> on 2014/05/15 01:10:35 UTC

[jira] [Commented] (AMBARI-5750) Ambari not configuring Ganglia/gmetad correctly, leading to log spam

    [ https://issues.apache.org/jira/browse/AMBARI-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998248#comment-13998248 ] 

Michael Harp commented on AMBARI-5750:
--------------------------------------

Commenting out the data_source entries in /etc/ganglia/hdp/gmetad.conf for services that are not installed removes the errors from /var/log/messages.

{code}
data_source "HDPJournalNode" gear2.labs.teradata.com:8654
#data_source "HDPFlumeServer" gear2.labs.teradata.com:8655
data_source "HDPHBaseRegionServer" gear2.labs.teradata.com:8656
data_source "HDPNodeManager" gear2.labs.teradata.com:8657
#data_source "HDPTaskTracker" gear2.labs.teradata.com:8658
data_source "HDPDataNode" gear2.labs.teradata.com:8659
data_source "HDPSlaves" gear2.labs.teradata.com:8660
data_source "HDPNameNode" gear2.labs.teradata.com:8661
#data_source "HDPJobTracker" gear2.labs.teradata.com:8662
data_source "HDPHBaseMaster" gear2.labs.teradata.com:8663
data_source "HDPResourceManager" gear2.labs.teradata.com:8664
data_source "HDPHistoryServer" gear2.labs.teradata.com:8666
#data_source "HDPNimbus" gear2.labs.teradata.com:8649
#data_source "HDPSupervisor" gear2.labs.teradata.com:8650
{code}
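The manual edit above can be sketched as a small script. This is a minimal, hypothetical illustration (not part of Ambari): given the gmetad.conf text and the set of clusters that are actually installed, it comments out every data_source line for anything else. The `comment_unused_data_sources` name and the `installed` set are assumptions for the example.

```python
import re

def comment_unused_data_sources(conf_text, installed):
    """Comment out data_source entries whose cluster name
    is not in the set of installed services."""
    out = []
    for line in conf_text.splitlines():
        # A data_source line looks like: data_source "HDPNameNode" host:8661
        m = re.match(r'\s*data_source\s+"([^"]+)"', line)
        if m and m.group(1) not in installed:
            out.append("#" + line)  # disable the unused entry
        else:
            out.append(line)
    return "\n".join(out)
```

After rewriting the file this way, gmetad would need a restart for the change to take effect.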

> Ambari not configuring Ganglia/gmetad correctly, leading to log spam
> --------------------------------------------------------------------
>
>                 Key: AMBARI-5750
>                 URL: https://issues.apache.org/jira/browse/AMBARI-5750
>             Project: Ambari
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>            Reporter: Sudhir Prakash
>
> I performed a 2.1 stack install via the Ambari GUI of all components except for Storm. I noticed that on my primary master node, /var/log/messages is being spammed by gmetad every few seconds.
> {code}
> May 12 15:01:27 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPSupervisor] failed to contact node 39.0.8.1
> May 12 15:01:27 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPSupervisor] datasource
> May 12 15:01:28 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPTaskTracker] failed to contact node 39.0.8.1
> May 12 15:01:28 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPTaskTracker] datasource
> May 12 15:01:37 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPJobTracker] failed to contact node 39.0.8.1
> May 12 15:01:37 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPJobTracker] datasource
> May 12 15:01:37 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPFlumeServer] failed to contact node 39.0.8.1
> May 12 15:01:37 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPFlumeServer] datasource
> May 12 15:01:39 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPNimbus] failed to contact node 39.0.8.1
> May 12 15:01:39 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPNimbus] datasource
> May 12 15:01:42 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPSupervisor] failed to contact node 39.0.8.1
> May 12 15:01:42 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPSupervisor] datasource
> May 12 15:01:43 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPTaskTracker] failed to contact node 39.0.8.1
> May 12 15:01:43 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPTaskTracker] datasource
> May 12 15:01:52 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPJobTracker] failed to contact node 39.0.8.1
> May 12 15:01:52 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPJobTracker] datasource
> May 12 15:01:52 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPFlumeServer] failed to contact node 39.0.8.1
> May 12 15:01:52 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPFlumeServer] datasource
> May 12 15:01:55 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPNimbus] failed to contact node 39.0.8.1
> May 12 15:01:55 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPNimbus] datasource
> May 12 15:01:57 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPSupervisor] failed to contact node 39.0.8.1
> May 12 15:01:57 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPSupervisor] datasource
> May 12 15:01:58 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPTaskTracker] failed to contact node 39.0.8.1
> May 12 15:01:58 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPTaskTracker] datasource
> May 12 15:02:06 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPFlumeServer] failed to contact node 39.0.8.1
> May 12 15:02:06 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPFlumeServer] datasource
> May 12 15:02:07 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() for [HDPJobTracker] failed to contact node 39.0.8.1
> May 12 15:02:07 hadoopvm1-1 /usr/sbin/gmetad[26198]: data_thread() got no answer from any [HDPJobTracker] datasource{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)