Posted to dev@ambari.apache.org by "Alexander Denissov (JIRA)" <ji...@apache.org> on 2014/08/26 23:15:58 UTC

[jira] [Created] (AMBARI-7023) Incorrect ATS metric request for non-HDP stack with version 2.1

Alexander Denissov created AMBARI-7023:
------------------------------------------

             Summary: Incorrect ATS metric request for non-HDP stack with version 2.1
                 Key: AMBARI-7023
                 URL: https://issues.apache.org/jira/browse/AMBARI-7023
             Project: Ambari
          Issue Type: Bug
    Affects Versions: 1.6.1
            Reporter: Alexander Denissov
            Priority: Critical


Define a non-HDP stack based on Hadoop 2.2, such as PHD 2.1.0 with the HDFS, YARN, and ZooKeeper services.

After cluster deployment, when the user presses "Complete" and the UI tries to navigate to the dashboard, a "Server Error" popup appears and the UI remains stuck on the loading bar at http://c6401.ambari.apache.org:8080/#/main/dashboard/metrics.

The popup shows the following error message:

500 status code received on GET method for API: /api/v1/clusters/test/components/?ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|ServiceComponentInfo/component_name=JOURNALNODE|ServiceComponentInfo/category=MASTER&fields=ServiceComponentInfo/Version,ServiceComponentInfo/StartTime,ServiceComponentInfo/HeapMemoryUsed,ServiceComponentInfo/HeapMemoryMax,ServiceComponentInfo/service_name,host_components/HostRoles/host_name,host_components/HostRoles/state,host_components/HostRoles/maintenance_state,host_components/HostRoles/stale_configs,host_components/metrics/jvm/memHeapUsedM,host_components/metrics/jvm/HeapMemoryMax,host_components/metrics/jvm/HeapMemoryUsed,host_components/metrics/jvm/memHeapCommittedM,host_components/metrics/mapred/jobtracker/trackers_decommissioned,host_components/metrics/cpu/cpu_wio,host_components/metrics/rpc/RpcQueueTime_avg_time,host_components/metrics/dfs/FSNamesystem/*,host_components/metrics/dfs/namenode/Version,host_components/metrics/dfs/namenode/DecomNodes,host_components/metrics/dfs/namenode/TotalFiles,host_components/metrics/dfs/namenode/UpgradeFinalized,host_components/metrics/dfs/namenode/Safemode,host_components/metrics/runtime/StartTime,host_components/metrics/yarn/Queue,ServiceComponentInfo/rm_metrics/cluster/activeNMcount,ServiceComponentInfo/rm_metrics/cluster/unhealthyNMcount,ServiceComponentInfo/rm_metrics/cluster/rebootedNMcount,ServiceComponentInfo/rm_metrics/cluster/decommissionedNMcount&minimal_response=true 

Error message: org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Could not find service for component, componentName=APP_TIMELINE_SERVER, clusterName=test, stackInfo=PHD-2.1.0 

The problem, I believe, is in the following lines of ambari-web/app/controllers/global/update_controller.js:

isATSInstalled = App.cache['services'].mapProperty('ServiceInfo.service_name').contains('YARN') && App.get('isHadoop21Stack'),
      flumeHandlerParam = isFlumeInstalled ? 'ServiceComponentInfo/component_name=FLUME_HANDLER|' : '',
      atsHandlerParam = isATSInstalled ? 'ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|' : '',

and in these lines of ambari-web/app/app.js:
isHadoop21Stack: function () {
    return (stringUtils.compareVersions(this.get('currentStackVersionNumber'), "2.1") === 1 ||
      stringUtils.compareVersions(this.get('currentStackVersionNumber'), "2.1") === 0)
  }.property('currentStackVersionNumber'),
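
To see how this check evaluates for the stack above, here is a minimal, self-contained sketch. compareVersions below is a simplified stand-in for Ambari's stringUtils.compareVersions (the real implementation may differ in edge cases), and the "2.1.0" version string is assumed from the PHD-2.1.0 stack id:

// Simplified stand-in for stringUtils.compareVersions: returns 1 if a > b,
// 0 if equal, -1 if a < b, comparing dot-separated numeric segments.
function compareVersions(a, b) {
  var pa = a.split('.').map(Number),
      pb = b.split('.').map(Number),
      len = Math.max(pa.length, pb.length);
  for (var i = 0; i < len; i++) {
    if ((pa[i] || 0) > (pb[i] || 0)) return 1;
    if ((pa[i] || 0) < (pb[i] || 0)) return -1;
  }
  return 0;
}

// Assumed value for the PHD-2.1.0 stack; '2.1' behaves the same here.
var currentStackVersionNumber = '2.1.0';
var isHadoop21Stack =
  compareVersions(currentStackVersionNumber, '2.1') === 1 ||
  compareVersions(currentStackVersionNumber, '2.1') === 0;
console.log(isHadoop21Stack); // true, even though this stack has no APP_TIMELINE_SERVER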

Since the stack version number is 2.1 and YARN is installed, the UI assumes the stack is Hadoop 2.4 compatible and therefore includes the APP_TIMELINE_SERVER component, as is true for HDP, but that assumption does not hold for non-HDP stacks such as PHD 2.1.0.
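
Purely as an illustrative sketch (not the actual patch for this issue), the request could instead be gated on whether the stack definition actually contains the component; stackHasComponent and stackComponents below are hypothetical stand-ins for data that would come from the stacks API:

// Hypothetical sketch only -- not the fix committed for AMBARI-7023.
// Instead of inferring ATS availability from the stack version number,
// check whether the stack definition actually lists the component.
function stackHasComponent(stackComponents, componentName) {
  return stackComponents.indexOf(componentName) !== -1;
}

// Stand-in component list for a PHD 2.1.0 stack: no APP_TIMELINE_SERVER.
var stackComponents = ['NAMENODE', 'DATANODE', 'RESOURCEMANAGER', 'NODEMANAGER', 'ZOOKEEPER_SERVER'];
var isATSInstalled = stackHasComponent(stackComponents, 'APP_TIMELINE_SERVER');
var atsHandlerParam = isATSInstalled ? 'ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|' : '';
console.log(atsHandlerParam); // '' -- the metrics request no longer asks for ATS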


