Posted to user@ambari.apache.org by Bryan Bende <bb...@gmail.com> on 2015/07/24 23:03:05 UTC

Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the
Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP
Sandbox... so I downloaded and started it up. The Metrics Collector service
wasn't running so I started it, and also added port 6188 to the VM port
forwarding. From there I used the example POST on the Wiki page and made a
successful POST which got a 200 response. After that I tried the query, but
could never get any results to come back.
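
For reference, a minimal Python sketch of such a POST (field names are taken
from the collector payload logged later in this thread; the endpoint, metric
name, and values are placeholders for whatever you are actually sending):

import json
import time
import urllib.request

now_ms = int(time.time() * 1000)
payload = {
    "metrics": [{
        "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
        "appid": "amssmoketestfake",
        "hostname": "localhost",
        "timestamp": now_ms,
        "starttime": now_ms,
        # keys of the inner map are epoch-millisecond timestamps as strings
        "metrics": {str(now_ms): 0.963781711428}
    }]
}
req = urllib.request.Request(
    "http://localhost:6188/ws/v1/timeline/metrics",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST")
print(urllib.request.urlopen(req).status)  # expect a 200 response on success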

I know this list is not specific to HDP, but I was wondering if anyone has
any suggestions as to what I can look at to figure out what is happening
with the data I am posting.

I was watching the metrics collector log while posting and querying and
didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan

Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
Feel free to update as necessary:
https://issues.apache.org/jira/browse/AMBARI-12584

-Bryan

On Wed, Jul 29, 2015 at 3:38 PM, Siddharth Wagle <sw...@hortonworks.com>
wrote:

>  Hi Bryan,
>
>
>  Please go ahead and file a Jira. HBASE/Phoenix is case sensitive.
> Ideally we should retain the sensitivity, meaning if you POST lowercase you
> are expected to query in lowercase.
>
>
>  Will look into the code and continue the discussion on the Jira.
>
>
>  Regards,
>
> Sid
>
>
>  ------------------------------
> *From:* Bryan Bende <bb...@gmail.com>
> *Sent:* Wednesday, July 29, 2015 12:28 PM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: Posting Metrics to Ambari
>
>  FWIW I was finally able to get this to work and the issue seems to be
> case sensitivity in the appId field...
>
>  Send a metric with APP_ID=*NIFI* then query:
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
> Gets 0 results.
>
>  Send a metric with APP_ID=*nifi* then query:
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
> Gets results, even though appId=NIFI in the query.
>
>  Would this be worthy of a jira? I would expect that if the query side is
> always going to search lower case, then the ingest side should be
> normalizing to lower case.
>
>  Thanks,
>
>  Bryan
>
>
> On Tue, Jul 28, 2015 at 1:07 PM, Bryan Bende <bb...@gmail.com> wrote:
>
>> As an update, I was able to create a new service and get it installed in
>> Ambari, and got a widget to display on the metrics panel for the service.
>>
>>  So now it seems like the missing piece is getting the metrics exposed
>> through the Ambari REST API, which may or may not be related to not getting
>> results from the collector service API. I have a metrics.json with the
>> following:
>>
>>   {
>>>   "NIFI_MASTER": {
>>>     "Component": [{
>>>         "type": "ganglia",
>>>         "metrics": {
>>>           "default": {
>>>             "metrics/nifi/FlowFilesReceivedLast5mins": {
>>>               "metric": "FlowFiles_Received_Last_5_mins",
>>>               "pointInTime": false,
>>>               "temporal": true
>>>             }
>>>           }
>>>         }
>>>     }]
>>>   }
>>> }
>>
>>
>>  and widgets.json with the following:
>>
>>  {
>>>   "layouts": [
>>>     {
>>>       "layout_name": "default_nifi_dashboard",
>>>       "display_name": "Standard NiFi Dashboard",
>>>       "section_name": "NIFI_SUMMARY",
>>>       "widgetLayoutInfo": [
>>>         {
>>>           "widget_name": "Flow Files Received Last 5 mins",
>>>           "description": "The number of flow files received in the last
>>> 5 minutes.",
>>>           "widget_type": "GRAPH",
>>>           "is_visible": true,
>>>           "metrics": [
>>>             {
>>>               "name": "FlowFiles_Received_Last_5_mins",
>>>               "metric_path": "metrics/nifi/FlowFilesReceivedLast5mins",
>>>               "service_name": "NIFI",
>>>               "component_name": "NIFI_MASTER"
>>>             }
>>>           ],
>>>           "values": [
>>>             {
>>>               "name": "Flow Files Received",
>>>               "value": "${FlowFiles_Received_Last_5_mins}"
>>>             }
>>>           ],
>>>           "properties": {
>>>             "display_unit": "%",
>>>             "graph_type": "LINE",
>>>             "time_range": "1"
>>>           }
>>>         }
>>>       ]
>>>     }
>>>   ]
>>> }
>>
>>
>>  Hitting this end-point doesn't show any metrics though:
>>
>> http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER
>>
>>  -Bryan
>>
>>
>> On Tue, Jul 28, 2015 at 9:39 AM, Bryan Bende <bb...@gmail.com> wrote:
>>
>>> The data is present in the aggregate tables...
>>>
>>>  0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD* WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>>
>>> +--------------------------------+-----------+---------------+--------+--------------------------------------+
>>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID | INSTANCE_ID                          |
>>> +--------------------------------+-----------+---------------+--------+--------------------------------------+
>>> | FlowFiles_Received_Last_5_mins | localhost | 1438047369541 | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 |
>>> +--------------------------------+-----------+---------------+--------+--------------------------------------+
>>>
>>>  0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD_MINUTE* WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>>
>>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
>>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438047369541 |
>>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>>
>>>  0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD_HOURLY* WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>>
>>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
>>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438045569276 |
>>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>>
>>>
>>>  Trying a smaller time range (2 mins surrounding the timestamp from the
>>> first record above)....
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=seconds
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=minutes
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=hours
>>>
>>>
>>>  Those all get no results. The only time I got a difference response,
>>> was this example:
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=143804556927&endTime=1438047420000
>>>
>>> which returned:
>>>
>>> {"exception":"BadRequestException","message":"java.lang.Exception: The time range query for precision table exceeds row count limit, please query aggregate table instead.","javaClassName":"org.apache.hadoop.yarn.webapp.BadRequestException"}
>>>
>>>
>>> On Mon, Jul 27, 2015 at 10:50 PM, Siddharth Wagle <
>>> swagle@hortonworks.com> wrote:
>>>
>>>>  For Step1, when you say exposing metrics through the Ambari REST
>>>> API... are you talking about the metrics collector REST API, or through the
>>>> Ambari Server REST API?
>>>>
>>>> Answer: The Ambari REST API. Note that this is the intended use because it
>>>> is what ties the metrics to your cluster resources; for example, you can
>>>> query for metrics for only the active NameNode using Ambari's API.
>>>>
>>>>
>>>>
>>>>  Is SERVER_TIME the field that has to fall between startTime and
>>>> endTime?
>>>>
>>>> Yes. That is correct
>>>>
>>>>
>>>>  There is nothing special about the query; you seem to have the
>>>> fragments right. The only thing is that you are querying for a large time
>>>> window. AMS would not return data from the METRIC_RECORD table for such a
>>>> large time window; it would try to find this in the aggregate
>>>> tables, METRIC_RECORD_MINUTE or HOURLY. Try reducing your time range,
>>>> and also check the aggregate tables; the data should still be present in
>>>> those tables.
>>>>
>>>>
>>>>  Precision params:
>>>>
>>>>
>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>
>>>>
>>>>  -Sid
>>>>
>>>>
>>>>  ------------------------------
>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>> *Sent:* Monday, July 27, 2015 6:21 PM
>>>>
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>
>>>>   Hi Jaimin,
>>>>
>>>>  For Step1, when you say exposing metrics through the Ambari REST
>>>> API... are you talking about the metrics collector REST API, or through the
>>>> Ambari Server REST API?
>>>>
>>>>  I am able to see data through Phoenix, as an example:
>>>>
>>>> +--------------------------------+-----------+---------------+--------+
>>>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID |
>>>> +--------------------------------+-----------+---------------+--------+
>>>> | FlowFiles_Received_Last_5_mins | localhost | 1438045869329 | NIFI   |
>>>> +--------------------------------+-----------+---------------+--------+
>>>>
>>>>
>>>>  Then I try to use this API call:
>>>>
>>>>
>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000
>>>>
>>>> and I get: {"metrics":[]}
>>>>
>>>> Something must not be lining up with what I am sending over. Is
>>>> SERVER_TIME the field that has to fall between startTime and endTime?
>>>>
>>>> -Bryan
>>>>
>>>> On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com>
>>>> wrote:
>>>>
>>>>>  Hi Bryan,
>>>>>
>>>>>
>>>>>  There are 2 steps in this that need to be achieved.
>>>>>
>>>>>
>>>>>  STEP-1:  Exposing service metrics successfully through Ambari
>>>>> REST API
>>>>>
>>>>> STEP-2:  Ambari UI displaying widgets comprised from newly exposed
>>>>> metrics via Ambari server.
>>>>>
>>>>>
>>>>>
>>>>>  As step-1 is a pre-requisite to step-2, can you confirm that you were
>>>>> able to achieve step-1 (exposing service metrics successfully through
>>>>> Ambari REST API) ?
>>>>>
>>>>>
>>>>>  *NOTE:*
>>>>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>>>>> are the metrics specific to Ambari metrics service. If the new
>>>>> metrics that you want to expose are related to any other service then
>>>>> please edit/create metrics.json file in that specific service package and
>>>>> not in Ambari metrics service package. widgets.json also needs to be
>>>>> changed/added in the same service package and not at
>>>>> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json
>>>>> (unless you want to add system heatmaps for a stack that inherits HDP-2.0.6
>>>>> stack).
>>>>>
>>>>>
>>>>>
>>>>>  -- Thanks
>>>>>
>>>>>     Jaimin
>>>>>  ------------------------------
>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>> *Sent:* Sunday, July 26, 2015 2:10 PM
>>>>>
>>>>> *To:* user@ambari.apache.org
>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>
>>>>>
>>>>> Hi Sid,
>>>>>
>>>>> Thanks for the pointers about how to add a metric to the UI. Based on
>>>>> those instructions I modified
>>>>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>>>>> and added the following based on the test metrics I posted:
>>>>>
>>>>> "metrics/SmokeTest/FakeMetric": {
>>>>>
>>>>>               "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>>
>>>>>               "pointInTime": true,
>>>>>
>>>>>               "temporal": true
>>>>>
>>>>>             }
>>>>>
>>>>> From digging around the filesystem there appears to be a widgets.json
>>>>> in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks
>>>>> like this file only contained the definitions of the heatmaps, so I wasn't
>>>>> sure if this was the right place, but just to see what happened I modified
>>>>> it as follows:
>>>>>
>>>>> 1) Added a whole new layout:
>>>>>
>>>>> http://pastebin.com/KqeT8xfe
>>>>>
>>>>> 2) Added a heatmap for the test metric:
>>>>>
>>>>> http://pastebin.com/AQDT7u6v
>>>>>
>>>>> Then I restarted the HDP VM but I don't see anything in the UI under
>>>>> Metric Actions -> Add, or under Heatmaps. Anything that seems completely
>>>>> wrong about what I did? Maybe I should be going down the route of defining
>>>>> a new service type for the system I will be sending metrics from?
>>>>>
>>>>> Sorry to keep bothering with all these questions, I just don't have
>>>>> any previous experience with Ambari.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Bryan
>>>>>
>>>>> On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <
>>>>> swagle@hortonworks.com> wrote:
>>>>>
>>>>>>  The AMS API does not allow open ended queries so startTime and
>>>>>> endTime are required fields, the curl call should return the error code
>>>>>> with the apt response.
>>>>>>
>>>>>>
>>>>>>  If this doesn't happen please go ahead and file a Jira.
>>>>>>
>>>>>>
>>>>>>  Using AMS through Ambari UI after getting the plumbing work with
>>>>>> metrics.json completed would be much easier. The AMS API does need some
>>>>>> refinement. Jiras / Bugs are welcome.
>>>>>>
>>>>>>
>>>>>>  -Sid
>>>>>>
>>>>>>
>>>>>>
>>>>>>  ------------------------------
>>>>>> *From:* Siddharth Wagle <sw...@hortonworks.com>
>>>>>> *Sent:* Saturday, July 25, 2015 9:01 PM
>>>>>>
>>>>>> *To:* user@ambari.apache.org
>>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>>
>>>>>>
>>>>>> No dev work needed; you only need to modify the metrics.json file and
>>>>>> then add the widget from the UI.
>>>>>>
>>>>>>
>>>>>>  Stack details:
>>>>>>
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>>>>>>
>>>>>>
>>>>>>  UI specifics:
>>>>>>
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>>>>>>
>>>>>>
>>>>>>  -Sid
>>>>>>
>>>>>>
>>>>>>  ------------------------------
>>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>>> *Sent:* Saturday, July 25, 2015 7:10 PM
>>>>>> *To:* user@ambari.apache.org
>>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>>
>>>>>>  Quick update, I was able to connect with the phoenix 4.2.2 client
>>>>>> and I did get results querying with:
>>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
>>>>>> 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>>>>>>
>>>>>>  Now that I know the metrics are posting, I am less concerned about
>>>>>> querying through the REST API.
>>>>>>
>>>>>>  Is there any way to get a custom metric added to the main page of
>>>>>> Ambari? or does this require development work?
>>>>>>
>>>>>>  Thanks,
>>>>>>
>>>>>>  Bryan
>>>>>>
>>>>>> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Sid,
>>>>>>>
>>>>>>>  Thanks for the suggestions. I turned on DEBUG for the metrics
>>>>>>> collector (had to do this through the Ambari UI configs section) and now I
>>>>>>> can see some activity... When I post a metric I see:
>>>>>>>
>>>>>>>  01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {
>>>>>>>
>>>>>>>   "metrics" : [ {
>>>>>>>
>>>>>>>     "timestamp" : 1432075898000,
>>>>>>>
>>>>>>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>>>>
>>>>>>>     "appid" : "amssmoketestfake",
>>>>>>>
>>>>>>>     "hostname" : "localhost",
>>>>>>>
>>>>>>>     "starttime" : 1432075898000,
>>>>>>>
>>>>>>>     "metrics" : {
>>>>>>>
>>>>>>>       "1432075898000" : 0.963781711428,
>>>>>>>
>>>>>>>       "1432075899000" : 1.432075898E12
>>>>>>>
>>>>>>>     }
>>>>>>>
>>>>>>>   } ]
>>>>>>>
>>>>>>> }
>>>>>>>
>>>>>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store
>>>>>>> connection url: jdbc:phoenix:localhost:61181:/hbase
>>>>>>>
>>>>>>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for
>>>>>>> METRIC_RECORD with 8 key values of total size 925 bytes
>>>>>>>
>>>>>>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of
>>>>>>> 2 mutations into METRIC_RECORD: 3 ms
>>>>>>>
>>>>>>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200
>>>>>>>
>>>>>>>
>>>>>>>  So it looks like it posted successfully. Then I hit:
>>>>>>>
>>>>>>>
>>>>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>>>>>>
>>>>>>> and I see...
>>>>>>>
>>>>>>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>>> ParallelIterators:412 - Guideposts: ]
>>>>>>>
>>>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>>> ParallelIterators:481 - The parallelScans:
>>>>>>> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>>>>>>
>>>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>>> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
>>>>>>> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>>>>>>
>>>>>>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 -
>>>>>>> Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
>>>>>>> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>>>>>>
>>>>>>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
>>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>>> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>>>>>>
>>>>>>> I'll see if I can get the phoenix client working and see what that
>>>>>>> returns.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Bryan
>>>>>>>
>>>>>>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <
>>>>>>> swagle@hortonworks.com> wrote:
>>>>>>>
>>>>>>>>  Hi Bryan,
>>>>>>>>
>>>>>>>>
>>>>>>>>  Few things you can do:
>>>>>>>>
>>>>>>>>
>>>>>>>>  1. Turn on DEBUG mode by changing log4j.properties at,
>>>>>>>> /etc/ambari-metrics-collector/conf/
>>>>>>>>
>>>>>>>> This might reveal more info. I don't think we print every metric
>>>>>>>> received to the log in 2.0 or 2.1; I recently added this option to
>>>>>>>> trunk when TRACE is enabled.
>>>>>>>>
>>>>>>>>
>>>>>>>>  2. Connect using Phoenix directly and you can do a SELECT query
>>>>>>>> like this:
>>>>>>>>
>>>>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
>>>>>>>> '<your-metric-name>' order by SERVER_TIME desc limit 10;
>>>>>>>>
>>>>>>>>
>>>>>>>>  Instructions for connecting to Phoenix:
>>>>>>>>
>>>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>>>>>>
>>>>>>>>
>>>>>>>>  3. What API call are you making to get metrics?
>>>>>>>>
>>>>>>>> E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>>>>>>
>>>>>>>>
>>>>>>>>  -Sid
>>>>>>>>
>>>>>>>>
>>>>>>>>  ------------------------------
>>>>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>>>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>>>>>>> *To:* user@ambari.apache.org
>>>>>>>> *Subject:* Posting Metrics to Ambari
>>>>>>>>
>>>>>>>>   I'm interested in sending metrics to Ambari and I've been
>>>>>>>> looking at the Metrics Collector REST API described here:
>>>>>>>>
>>>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>>>>>
>>>>>>>>  I figured the easiest way to test it would be to get the latest
>>>>>>>> HDP Sandbox... so I downloaded and started it up. The Metrics Collector
>>>>>>>> service wasn't running so I started it, and also added port 6188 to the VM
>>>>>>>> port forwarding. From there I used the example POST on the Wiki page and
>>>>>>>> made a successful POST which got a 200 response. After that I tried the
>>>>>>>> query, but could never get any results to come back.
>>>>>>>>
>>>>>>>>  I know this list is not specific to HDP, but I was wondering if
>>>>>>>> anyone has any suggestions as to what I can look at to figure out what is
>>>>>>>> happening with the data I am posting.
>>>>>>>>
>>>>>>>>  I was watching the metrics collector log while posting and
>>>>>>>> querying and didn't see any activity besides the periodic aggregation.
>>>>>>>>
>>>>>>>>  Any suggestions would be greatly appreciated.
>>>>>>>>
>>>>>>>>  Thanks,
>>>>>>>>
>>>>>>>>  Bryan
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Posting Metrics to Ambari

Posted by Siddharth Wagle <sw...@hortonworks.com>.
Hi Bryan,


Please go ahead and file a Jira. HBASE/Phoenix is case sensitive. Ideally we should retain the sensitivity, meaning if you POST lowercase you are expected to query in lowercase.


Will look into the code and continue the discussion on the Jira.


Regards,

Sid


________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Wednesday, July 29, 2015 12:28 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari

FWIW I was finally able to get this to work and the issue seems to be case sensitivity in the appId field...

Send a metric with APP_ID=NIFI then query:
http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
Gets 0 results.

Send a metric with APP_ID=nifi then query:
http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
Gets results, even though appId=NIFI in the query.

Would this be worthy of a jira? I would expect that if the query side is always going to search lower case, then the ingest side should be normalizing to lower case.
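
As a client-side workaround until that is resolved, one option (a sketch with illustrative names, not something from this thread) is to lower-case the app id on both the ingest and the query side so the two always agree:

def with_normalized_app_id(metric: dict) -> dict:
    # copy the metric and force the "appid" field to lower case before posting
    normalized = dict(metric)
    normalized["appid"] = normalized["appid"].lower()
    return normalized

def query_string(app_id: str, start_ms: int, end_ms: int) -> str:
    # build the query with the same lower-cased appId used at ingest time
    return ("metricNames=FlowFiles_Received_Last_5_mins"
            f"&appId={app_id.lower()}"
            f"&hostname=localhost&startTime={start_ms}&endTime={end_ms}")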

Thanks,

Bryan


On Tue, Jul 28, 2015 at 1:07 PM, Bryan Bende <bb...@gmail.com>> wrote:
As an update, I was able to create a new service and get it installed in Ambari, and got a widget to display on the metrics panel for the service.

So now it seems like the missing piece is getting the metrics exposed through the Ambari REST API, which may or may not be related to not getting results from the collector service API. I have a metrics.json with the following:

{
  "NIFI_MASTER": {
    "Component": [{
        "type": "ganglia",
        "metrics": {
          "default": {
            "metrics/nifi/FlowFilesReceivedLast5mins": {
              "metric": "FlowFiles_Received_Last_5_mins",
              "pointInTime": false,
              "temporal": true
            }
          }
        }
    }]
  }
}

and widgets.json with the following:

{
  "layouts": [
    {
      "layout_name": "default_nifi_dashboard",
      "display_name": "Standard NiFi Dashboard",
      "section_name": "NIFI_SUMMARY",
      "widgetLayoutInfo": [
        {
          "widget_name": "Flow Files Received Last 5 mins",
          "description": "The number of flow files received in the last 5 minutes.",
          "widget_type": "GRAPH",
          "is_visible": true,
          "metrics": [
            {
              "name": "FlowFiles_Received_Last_5_mins",
              "metric_path": "metrics/nifi/FlowFilesReceivedLast5mins",
              "service_name": "NIFI",
              "component_name": "NIFI_MASTER"
            }
          ],
          "values": [
            {
              "name": "Flow Files Received",
              "value": "${FlowFiles_Received_Last_5_mins}"
            }
          ],
          "properties": {
            "display_unit": "%",
            "graph_type": "LINE",
            "time_range": "1"
          }
        }
      ]
    }
  ]
}

Hitting this end-point doesn't show any metrics though:
http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER
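
One thing worth checking (an assumption on my part, not something confirmed in this thread): the Ambari API usually returns temporal metrics only when they are requested explicitly with a fields filter, with start/end in epoch seconds and a step, along the lines of:

http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER?fields=metrics/nifi/FlowFilesReceivedLast5mins[1438047300,1438047420,15]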

-Bryan


On Tue, Jul 28, 2015 at 9:39 AM, Bryan Bende <bb...@gmail.com>> wrote:
The data is present in the aggregate tables...


0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+-------------------------------------------+

|               METRIC_NAME                |                 HOSTNAME                 |               SERVER_TIME                |                  APP_ID                  |               INSTANCE_ID                 |

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+-------------------------------------------+

| FlowFiles_Received_Last_5_mins           | localhost                          | 1438047369541                            | NIFI                                     | 5dbaaa80-0760-4241-80aa-b00b52f8efb4      |


0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from METRIC_RECORD_MINUTE WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+-------------------------------------------+

|               METRIC_NAME                |                 HOSTNAME                 |                  APP_ID                  |               INSTANCE_ID                |               SERVER_TIME                 |

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+-------------------------------------------+

| FlowFiles_Received_Last_5_mins           | localhost                          | NIFI                                     | 5dbaaa80-0760-4241-80aa-b00b52f8efb4     | 1438047369541                             |


0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from METRIC_RECORD_HOURLY WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+-------------------------------------------+

|               METRIC_NAME                |                 HOSTNAME                 |                  APP_ID                  |               INSTANCE_ID                |               SERVER_TIME                 |

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+-------------------------------------------+

| FlowFiles_Received_Last_5_mins           | localhost                          | NIFI                                     | 5dbaaa80-0760-4241-80aa-b00b52f8efb4     | 1438045569276                             |


Trying a smaller time range (2 mins surrounding the timestamp from the first record above)....

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=seconds

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=minutes

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=hours


Those all get no results. The only time I got a difference response, was this example:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=143804556927&endTime=1438047420000

which returned:

{"exception":"BadRequestException","message":"java.lang.Exception: The time range query for precision table exceeds row count limit, please query aggregate table instead.","javaClassName":"org.apache.hadoop.yarn.webapp.BadRequestException"}

On Mon, Jul 27, 2015 at 10:50 PM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

For Step1, when you say exposing metrics through the Ambari REST API... are you talking about the metrics collector REST API, or through the Ambari Server REST API?

Answer: The Ambari REST API. Note that this is the intended use because it is what ties the metrics to your cluster resources; for example, you can query for metrics for only the active NameNode using Ambari's API.



Is SERVER_TIME the field that has to fall between startTime and endTime?

Yes. That is correct


There is nothing special about the query; you seem to have the fragments right. The only thing is that you are querying for a large time window. AMS would not return data from the METRIC_RECORD table for such a large time window; it would try to find this in the aggregate tables, METRIC_RECORD_MINUTE or HOURLY. Try reducing your time range, and also check the aggregate tables; the data should still be present in those tables.
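
A small worked example of the "reduce your time range" advice: take a SERVER_TIME value from one of the Phoenix rows shown earlier in the thread and build a window of a minute on either side of it (all values in epoch milliseconds):

server_time_ms = 1438047369541         # SERVER_TIME from the METRIC_RECORD row
start_time = server_time_ms - 60_000   # one minute before
end_time = server_time_ms + 60_000     # one minute after
print(f"startTime={start_time}&endTime={end_time}")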


Precision params:

https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Monday, July 27, 2015 6:21 PM

To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Posting Metrics to Ambari

Hi Jaimin,

For Step1, when you say exposing metrics through the Ambari REST API... are you talking about the metrics collector REST API, or through the Ambari Server REST API?

I am able to see data through Phoenix, as an example:
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+

|               METRIC_NAME                |                 HOSTNAME                 |               SERVER_TIME                |                  APP_ID                  |

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+

| FlowFiles_Received_Last_5_mins           | localhost                          | 1438045869329                            | NIFI                                     |


Then I try to use this API call:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000

and I get: {"metrics":[]}

Something must not be lining up with what I am sending over. Is SERVER_TIME the field that has to fall between startTime and endTime?

-Bryan

On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com>> wrote:

Hi Bryan,


There are 2 steps in this that need to be achieved.


STEP-1:  Exposing service metrics successfully through Ambari REST API

STEP-2:  Ambari UI displaying widgets comprised from newly exposed metrics via Ambari server.



As step-1 is a pre-requisite to step-2, can you confirm that you were able to achieve step-1 (exposing service metrics successfully through Ambari REST API) ?


NOTE: /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json are the metrics specific to Ambari metrics service. If the new metrics that you want to expose are related to any other service then please edit/create metrics.json file in that specific service package and not in Ambari metrics service package. widgets.json also needs to be changed/added in the same service package and not at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless you want to add system heatmaps for a stack that inherits HDP-2.0.6 stack).



-- Thanks

    Jaimin

________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Sunday, July 26, 2015 2:10 PM

To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Posting Metrics to Ambari


Hi Sid,

Thanks for the pointers about how to add a metric to the UI. Based on those instructions I modified /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json and added the following based on the test metrics I posted:

"metrics/SmokeTest/FakeMetric": {

              "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",

              "pointInTime": true,

              "temporal": true

            }

From digging around the filesystem there appears to be a widgets.json in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks like this file only contained the definitions of the heatmaps, so I wasn't sure if this was the right place, but just to see what happened I modified it as follows:

1) Added a whole new layout:

http://pastebin.com/KqeT8xfe

2) Added a heatmap for the test metric:

http://pastebin.com/AQDT7u6v

Then I restarted the HDP VM but I don't see anything in the UI under Metric Actions -> Add, or under Heatmaps. Anything that seems completely wrong about what I did? Maybe I should be going down the route of defining a new service type for the system I will be sending metrics from?

Sorry to keep bothering with all these questions, I just don't have any previous experience with Ambari.

Thanks,

Bryan

On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

The AMS API does not allow open ended queries so startTime and endTime are required fields, the curl call should return the error code with the apt response.


If this doesn't happen please go ahead and file a Jira.


Using AMS through Ambari UI after getting the plumbing work with metrics.json completed would be much easier. The AMS API does need some refinement. Jiras / Bugs are welcome.


-Sid



________________________________
From: Siddharth Wagle <sw...@hortonworks.com>>
Sent: Saturday, July 25, 2015 9:01 PM

To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Posting Metrics to Ambari


No dev work needed; you only need to modify the metrics.json file and then add the widget from the UI.


Stack details:

https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics


UI specifics:

https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Saturday, July 25, 2015 7:10 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Posting Metrics to Ambari

Quick update, I was able to connect with the phoenix 4.2.2 client and I did get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;

Now that I know the metrics are posting, I am less concerned about querying through the REST API.

Is there any way to get a custom metric added to the main page of Ambari? or does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com>> wrote:
Hi Sid,

Thanks for the suggestions. I turned on DEBUG for the metrics collector (had to do this through the Ambari UI configs section) and now I can see some activity... When I post a metric I see:


01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {

  "metrics" : [ {

    "timestamp" : 1432075898000,

    "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",

    "appid" : "amssmoketestfake",

    "hostname" : "localhost",

    "starttime" : 1432075898000,

    "metrics" : {

      "1432075898000" : 0.963781711428,

      "1432075899000" : 1.432075898E12

    }

  } ]

}

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store connection url: jdbc:phoenix:localhost:61181:/hbase

01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values of total size 925 bytes

01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of  2 mutations into METRIC_RECORD: 3 ms

01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200


So it looks like it posted successfully. Then I hit:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric

and I see...

01:31:16,952 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:412 - Guideposts: ]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:481 - The parallelScans: [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1, count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]

01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan: {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}

01:31:16,959 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] PhoenixHBaseAccessor:552 - Aggregate records size: 0

I'll see if I can get the phoenix client working and see what that returns.

Thanks,

Bryan

On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

Hi Bryan,


Few things you can do:


1. Turn on DEBUG mode by changing log4j.properties at, /etc/ambari-metrics-collector/conf/

This might reveal more info. I don't think we print every metric received to the log in 2.0 or 2.1; I recently added this option to trunk when TRACE is enabled.


2. Connect using Phoenix directly and you can do a SELECT query like this:

SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;


Instructions for connecting to Phoenix:

https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema


3. What API call are you making to get metrics?

E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
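
A minimal Python sketch of that GET (collector address, metric name, appId, and the epoch-millisecond window are placeholders to fill in):

import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "metricNames": "AMBARI_METRICS.SmokeTest.FakeMetric",
    "appId": "amssmoketestfake",
    "hostname": "localhost",
    "startTime": 1432075898000,
    "endTime": 1432075899000,
})
url = "http://localhost:6188/ws/v1/timeline/metrics?" + params
with urllib.request.urlopen(url) as resp:
    # the collector answers with a JSON document of the form {"metrics": [...]}
    print(json.loads(resp.read().decode("utf-8")))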


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Friday, July 24, 2015 2:03 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP Sandbox... so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.

I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.

I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan








Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
FWIW I was finally able to get this to work and the issue seems to be case
sensitivity in the appId field...

Send a metric with APP_ID=*NIFI* then query:
http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
Gets 0 results.

Send a metric with APP_ID=*nifi* then query:
http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
Gets results, even though appId=NIFI in the query.

Would this be worthy of a jira? I would expect that if the query side is
always going to search lower case, then the ingest side should be
normalizing to lower case.

Thanks,

Bryan


On Tue, Jul 28, 2015 at 1:07 PM, Bryan Bende <bb...@gmail.com> wrote:

> As an update, I was able to create a new service and get it installed in
> Ambari, and got a widget to display on the metrics panel for the service.
>
> So now it seems like the missing piece is getting the metrics exposed
> through the Ambari REST API, which may or may not be related to not getting
> results from the collector service API. I have a metrics.json with the
> following:
>
> {
>>   "NIFI_MASTER": {
>>     "Component": [{
>>         "type": "ganglia",
>>         "metrics": {
>>           "default": {
>>             "metrics/nifi/FlowFilesReceivedLast5mins": {
>>               "metric": "FlowFiles_Received_Last_5_mins",
>>               "pointInTime": false,
>>               "temporal": true
>>             }
>>           }
>>         }
>>     }]
>>   }
>> }
>
>
> and widgets.json with the following:
>
> {
>>   "layouts": [
>>     {
>>       "layout_name": "default_nifi_dashboard",
>>       "display_name": "Standard NiFi Dashboard",
>>       "section_name": "NIFI_SUMMARY",
>>       "widgetLayoutInfo": [
>>         {
>>           "widget_name": "Flow Files Received Last 5 mins",
>>           "description": "The number of flow files received in the last 5
>> minutes.",
>>           "widget_type": "GRAPH",
>>           "is_visible": true,
>>           "metrics": [
>>             {
>>               "name": "FlowFiles_Received_Last_5_mins",
>>               "metric_path": "metrics/nifi/FlowFilesReceivedLast5mins",
>>               "service_name": "NIFI",
>>               "component_name": "NIFI_MASTER"
>>             }
>>           ],
>>           "values": [
>>             {
>>               "name": "Flow Files Received",
>>               "value": "${FlowFiles_Received_Last_5_mins}"
>>             }
>>           ],
>>           "properties": {
>>             "display_unit": "%",
>>             "graph_type": "LINE",
>>             "time_range": "1"
>>           }
>>         }
>>       ]
>>     }
>>   ]
>> }
>
>
> Hitting this end-point doesn't show any metrics though:
>
> http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER
>
> -Bryan
>
>
> On Tue, Jul 28, 2015 at 9:39 AM, Bryan Bende <bb...@gmail.com> wrote:
>
>> The data is present in the aggregate tables...
>>
>> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD* WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>
>> +--------------------------------+-----------+---------------+--------+--------------------------------------+
>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID | INSTANCE_ID                          |
>> +--------------------------------+-----------+---------------+--------+--------------------------------------+
>> | FlowFiles_Received_Last_5_mins | localhost | 1438047369541 | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 |
>> +--------------------------------+-----------+---------------+--------+--------------------------------------+
>>
>> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD_MINUTE* WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>
>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438047369541 |
>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>
>> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD_HOURLY* WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>
>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438045569276 |
>> +--------------------------------+-----------+--------+--------------------------------------+---------------+
>>
>>
>> Trying a smaller time range (2 mins surrounding the timestamp from the
>> first record above)....
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=seconds
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=minutes
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=hours
>>
>>
>> Those all get no results. The only time I got a difference response, was
>> this example:
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=143804556927&endTime=1438047420000
>>
>> which returned:
>>
>> {"exception":"BadRequestException","message":"java.lang.Exception: The time range query for precision table exceeds row count limit, please query aggregate table instead.","javaClassName":"org.apache.hadoop.yarn.webapp.BadRequestException"}
>>
>>
>> On Mon, Jul 27, 2015 at 10:50 PM, Siddharth Wagle <swagle@hortonworks.com
>> > wrote:
>>
>>>  For Step1, when you say exposing metrics through the Ambari REST
>>> API... are you talking about the metrics collector REST API, or through the
>>> Ambari Server REST API?
>>>
>>> Answer: The Ambari REST API. Note that this is the intended use because it
>>> is what ties the metrics to your cluster resources; for example, you can
>>> query for metrics for only the active NameNode using Ambari's API.
>>>
>>>
>>>
>>>  Is SERVER_TIME the field that has to fall between startTime and
>>> endTime?
>>>
>>> Yes. That is correct
>>>
>>>
>>>  There is nothing special about the query; you seem to have the
>>> fragments right. The only thing is that you are querying for a large time
>>> window. AMS would not return data from the METRIC_RECORD table for such a
>>> large time window; it would try to find this in the aggregate
>>> tables, METRIC_RECORD_MINUTE or HOURLY. Try reducing your time range,
>>> and also check the aggregate tables; the data should still be present in
>>> those tables.
>>>
>>>
>>>  Precision params:
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>
>>>
>>>  -Sid
>>>
>>>
>>>  ------------------------------
>>> *From:* Bryan Bende <bb...@gmail.com>
>>> *Sent:* Monday, July 27, 2015 6:21 PM
>>>
>>> *To:* user@ambari.apache.org
>>> *Subject:* Re: Posting Metrics to Ambari
>>>
>>>  Hi Jaimin,
>>>
>>>  For Step1, when you say exposing metrics through the Ambari REST
>>> API... are you talking about the metrics collector REST API, or through the
>>> Ambari Server REST API?
>>>
>>>  I am able to see data through Phoenix, as an example:
>>>
>>> +--------------------------------+-----------+---------------+--------+
>>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID |
>>> +--------------------------------+-----------+---------------+--------+
>>> | FlowFiles_Received_Last_5_mins | localhost | 1438045869329 | NIFI   |
>>> +--------------------------------+-----------+---------------+--------+
>>>
>>>
>>>  Then I try to use this API call:
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000
>>>
>>> and I get: {"metrics":[]}
>>>
>>> Something must not be lining up with what I am sending over. Is
>>> SERVER_TIME the field that has to fall between startTime and endTime?
>>>
>>> -Bryan
>>>
>>> On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com>
>>> wrote:
>>>
>>>>  Hi Bryan,
>>>>
>>>>
>>>>  There are 2 steps in this that need to be achieved.
>>>>
>>>>
>>>>  STEP-1:  Exposing service metrics successfully through Ambari REST API
>>>>
>>>> STEP-2:  Ambari UI displaying widgets comprised from newly exposed
>>>> metrics via Ambari server.
>>>>
>>>>
>>>>
>>>>  As step-1 is a pre-requisite to step-2, can you confirm that you were
>>>> able to achieve step-1 (exposing service metrics successfully through
>>>> Ambari REST API) ?
>>>>
>>>>
>>>>  *NOTE:*
>>>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>>>> are the metrics specific to Ambari metrics service. If the new metrics
>>>> that you want to expose are related to any other service then please
>>>> edit/create metrics.json file in that specific service package and not in Ambari
>>>> metrics service package. widgets.json also needs to be changed/added in the
>>>> same service package and not at
>>>> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless
>>>> you want to add system heatmaps for a stack that inherits HDP-2.0.6 stack).
>>>>
>>>>
>>>>
>>>>  -- Thanks
>>>>
>>>>     Jaimin
>>>>  ------------------------------
>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>> *Sent:* Sunday, July 26, 2015 2:10 PM
>>>>
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>
>>>>
>>>> Hi Sid,
>>>>
>>>> Thanks for the pointers about how to add a metric to the UI. Based on
>>>> those instructions I modified
>>>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>>>> and added the following based on the test metrics I posted:
>>>>
>>>> "metrics/SmokeTest/FakeMetric": {
>>>>
>>>>               "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>
>>>>               "pointInTime": true,
>>>>
>>>>               "temporal": true
>>>>
>>>>             }
>>>>
>>>> From digging around the filesystem there appears to be a widgets.json
>>>> in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks
>>>> like this file only contained the definitions of the heatmaps, so I wasn't
>>>> sure if this was the right place, but just to see what happened I modified
>>>> it as follows:
>>>>
>>>> 1) Added a whole new layout:
>>>>
>>>> http://pastebin.com/KqeT8xfe
>>>>
>>>> 2) Added a heatmap for the test metric:
>>>>
>>>> http://pastebin.com/AQDT7u6v
>>>>
>>>> Then I restarted the HDP VM but I don't see anything in the UI under
>>>> Metric Actions -> Add, or under Heatmaps. Anything that seems completely
>>>> wrong about what I did? Maybe I should be going down the route of defining
>>>> a new service type for system I will be sending metrics from?
>>>>
>>>> Sorry to keep bothering with all these questions, I just don't have any
>>>> previous experience with Ambari.
>>>>
>>>> Thanks,
>>>>
>>>> Bryan
>>>>
>>>> On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <
>>>> swagle@hortonworks.com> wrote:
>>>>
>>>>>  The AMS API does not allow open ended queries so startTime and
>>>>> endTime are required fields, the curl call should return the error code
>>>>> with the apt response.
>>>>>
>>>>>
>>>>>  If this doesn't happen please go ahead and file a Jira.
>>>>>
>>>>>
>>>>>  Using AMS through Ambari UI after getting the plumbing work with
>>>>> metrics.json completed would be much easier. The AMS API does need some
>>>>> refinement. Jiras / Bugs are welcome.
>>>>>
>>>>>
>>>>>  -Sid
>>>>>
>>>>>
>>>>>
>>>>>  ------------------------------
>>>>> *From:* Siddharth Wagle <sw...@hortonworks.com>
>>>>> *Sent:* Saturday, July 25, 2015 9:01 PM
>>>>>
>>>>> *To:* user@ambari.apache.org
>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>
>>>>>
>>>>> No dev work need only need to modify metrics.json file and then add
>>>>> widget from UI.
>>>>>
>>>>>
>>>>>  Stack details:
>>>>>
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>>>>>
>>>>>
>>>>>  UI specifics:
>>>>>
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>>>>>
>>>>>
>>>>>  -Sid
>>>>>
>>>>>
>>>>>  ------------------------------
>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>> *Sent:* Saturday, July 25, 2015 7:10 PM
>>>>> *To:* user@ambari.apache.org
>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>
>>>>>  Quick update, I was able to connect with the phoenix 4.2.2 client
>>>>> and I did get results querying with:
>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
>>>>> 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>>>>>
>>>>>  Now that I know the metrics are posting, I am less concerned about
>>>>> querying through the REST API.
>>>>>
>>>>>  Is there any way to get a custom metric added to the main page of
>>>>> Ambari? or does this require development work?
>>>>>
>>>>>  Thanks,
>>>>>
>>>>>  Bryan
>>>>>
>>>>> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:
>>>>>
>>>>>> Hi Sid,
>>>>>>
>>>>>>  Thanks for the suggestions. I turned on DEBUG for the metrics
>>>>>> collector (had to do this through the Ambari UI configs section) and now I
>>>>>> can see some activity... When I post a metric I see:
>>>>>>
>>>>>>  01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {
>>>>>>
>>>>>>   "metrics" : [ {
>>>>>>
>>>>>>     "timestamp" : 1432075898000,
>>>>>>
>>>>>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>>>
>>>>>>     "appid" : "amssmoketestfake",
>>>>>>
>>>>>>     "hostname" : "localhost",
>>>>>>
>>>>>>     "starttime" : 1432075898000,
>>>>>>
>>>>>>     "metrics" : {
>>>>>>
>>>>>>       "1432075898000" : 0.963781711428,
>>>>>>
>>>>>>       "1432075899000" : 1.432075898E12
>>>>>>
>>>>>>     }
>>>>>>
>>>>>>   } ]
>>>>>>
>>>>>> }
>>>>>>
>>>>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store
>>>>>> connection url: jdbc:phoenix:localhost:61181:/hbase
>>>>>>
>>>>>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for
>>>>>> METRIC_RECORD with 8 key values of total size 925 bytes
>>>>>>
>>>>>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of
>>>>>> 2 mutations into METRIC_RECORD: 3 ms
>>>>>>
>>>>>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200
>>>>>>
>>>>>>
>>>>>>  So it looks like it posted successfully. Then I hit:
>>>>>>
>>>>>>
>>>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>>>>>
>>>>>> and I see...
>>>>>>
>>>>>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>> ParallelIterators:412 - Guideposts: ]
>>>>>>
>>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>> ParallelIterators:481 - The parallelScans:
>>>>>> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>>>>>
>>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
>>>>>> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>>>>>
>>>>>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
>>>>>> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
>>>>>> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>>>>>
>>>>>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
>>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>>> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>>>>>
>>>>>> I'll see if I can get the phoenix client working and see what that
>>>>>> returns.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Bryan
>>>>>>
>>>>>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <
>>>>>> swagle@hortonworks.com> wrote:
>>>>>>
>>>>>>>  Hi Bryan,
>>>>>>>
>>>>>>>
>>>>>>>  Few things you can do:
>>>>>>>
>>>>>>>
>>>>>>>  1. Turn on DEBUG mode by changing log4j.properties at,
>>>>>>> /etc/ambari-metrics-collector/conf/
>>>>>>>
>>>>>>> This might reveal more info, I don't think we print every metrics
>>>>>>> received to the log in 2.0 or 2.1, I did add this option if TRACE is
>>>>>>> enabled to trunk recently.
>>>>>>>
>>>>>>>
>>>>>>>  2. Connect using Phoenix directly and you can do a SELECT query
>>>>>>> like this:
>>>>>>>
>>>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>>>>>>> order by SERVER_TIME desc limit 10;
>>>>>>>
>>>>>>>
>>>>>>>  Instructions for connecting to Phoenix:
>>>>>>>
>>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>>>>>
>>>>>>>
>>>>>>>  3. What API call are you making to get metrics?
>>>>>>>
>>>>>>> E.g.: http://
>>>>>>> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>>>>>
>>>>>>>
>>>>>>>  -Sid
>>>>>>>
>>>>>>>
>>>>>>>  ------------------------------
>>>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>>>>>> *To:* user@ambari.apache.org
>>>>>>> *Subject:* Posting Metrics to Ambari
>>>>>>>
>>>>>>>   I'm interested in sending metrics to Ambari and I've been looking
>>>>>>> at the Metrics Collector REST API described here:
>>>>>>>
>>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>>>>
>>>>>>>  I figured the easiest way to test it would be to get the latest
>>>>>>> HDP Sandbox... so I downloaded and started it up. The Metrics Collector
>>>>>>> service wasn't running so I started it, and also added port 6188 to the VM
>>>>>>> port forwarding. From there I used the example POST on the Wiki page and
>>>>>>> made a successful POST which got a 200 response. After that I tried the
>>>>>>> query, but could never get any results to come back.
>>>>>>>
>>>>>>>  I know this list is not specific to HDP, but I was wondering if
>>>>>>> anyone has any suggestions as to what I can look at to figure out what is
>>>>>>> happening with the data I am posting.
>>>>>>>
>>>>>>>  I was watching the metrics collector log while posting and
>>>>>>> querying and didn't see any activity besides the periodic aggregation.
>>>>>>>
>>>>>>>  Any suggestions would be greatly appreciated.
>>>>>>>
>>>>>>>  Thanks,
>>>>>>>
>>>>>>>  Bran
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
As an update, I was able to create a new service and get it installed in
Ambari, and got a widget to display on the metrics panel for the service.

So now it seems like the missing piece is getting the metrics exposed
through the Ambari REST API, which may or may not be related to not getting
results from the collector service API. I have a metrics.json with the
following:

{
>   "NIFI_MASTER": {
>     "Component": [{
>         "type": "ganglia",
>         "metrics": {
>           "default": {
>             "metrics/nifi/FlowFilesReceivedLast5mins": {
>               "metric": "FlowFiles_Received_Last_5_mins",
>               "pointInTime": false,
>               "temporal": true
>             }
>           }
>         }
>     }]
>   }
> }


and widgets.json with the following:

{
>   "layouts": [
>     {
>       "layout_name": "default_nifi_dashboard",
>       "display_name": "Standard NiFi Dashboard",
>       "section_name": "NIFI_SUMMARY",
>       "widgetLayoutInfo": [
>         {
>           "widget_name": "Flow Files Received Last 5 mins",
>           "description": "The number of flow files received in the last 5
> minutes.",
>           "widget_type": "GRAPH",
>           "is_visible": true,
>           "metrics": [
>             {
>               "name": "FlowFiles_Received_Last_5_mins",
>               "metric_path": "metrics/nifi/FlowFilesReceivedLast5mins",
>               "service_name": "NIFI",
>               "component_name": "NIFI_MASTER"
>             }
>           ],
>           "values": [
>             {
>               "name": "Flow Files Received",
>               "value": "${FlowFiles_Received_Last_5_mins}"
>             }
>           ],
>           "properties": {
>             "display_unit": "%",
>             "graph_type": "LINE",
>             "time_range": "1"
>           }
>         }
>       ]
>     }
>   ]
> }


Hitting this end-point doesn't show any metrics though:
http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER
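
For completeness, here is a minimal sketch of the kind of request I would eventually expect to work once the wiring is right. It assumes the sandbox's default admin/admin credentials and that Ambari's temporal field syntax of fields=<metric-path>[startTime,endTime,step] (with epoch-second timestamps) applies to this component; both of those are assumptions on my part.

curl -u admin:admin \
  "http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER?fields=metrics/nifi/FlowFilesReceivedLast5mins[1438047300,1438047420,15]"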

-Bryan


On Tue, Jul 28, 2015 at 9:39 AM, Bryan Bende <bb...@gmail.com> wrote:

> The data is present in the aggregate tables...
>
> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD*
> WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME
> desc limit 10;
>
>
> +--------------------------------+-----------+---------------+--------+--------------------------------------+
> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID | INSTANCE_ID                          |
> +--------------------------------+-----------+---------------+--------+--------------------------------------+
> | FlowFiles_Received_Last_5_mins | localhost | 1438047369541 | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 |
>
> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from
> *METRIC_RECORD_MINUTE* WHERE METRIC_NAME =
> 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>
>
> +--------------------------------+-----------+--------+--------------------------------------+---------------+
> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
> +--------------------------------+-----------+--------+--------------------------------------+---------------+
> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438047369541 |
>
>
> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from
> *METRIC_RECORD_HOURLY* WHERE METRIC_NAME =
> 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>
>
> +--------------------------------+-----------+--------+--------------------------------------+---------------+
> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
> +--------------------------------+-----------+--------+--------------------------------------+---------------+
> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438045569276 |
>
>
> Trying a smaller time range (2 mins surrounding the timestamp from the
> first record above)....
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=seconds
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=minutes
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=hours
>
>
> Those all get no results. The only time I got a difference response, was
> this example:
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=143804556927&endTime=1438047420000
>
> which returned:
>
> {"exception":"BadRequestException","message":"java.lang.Exception: The time range query for precision table exceeds row count limit, please query aggregate table instead.","javaClassName":"org.apache.hadoop.yarn.webapp.BadRequestException"}
>
>
> On Mon, Jul 27, 2015 at 10:50 PM, Siddharth Wagle <sw...@hortonworks.com>
> wrote:
>
>>  For Step1, when you say exposing metrics through the Ambari REST API...
>> are you talking about the metrics collector REST API, or through the Ambari
>> Server REST API?
>>
>> Answer: Ambari REST API: Note that this is intended use because this is
>> what ties the metrics to you your cluster resources, example: You can query
>> for say give me metrics for the active Namenode only using Ambari's API.
>>
>>
>>
>>  Is SERVER_TIME the field that has to fall between startTime and endTime?
>>
>> Yes. That is correct
>>
>>
>>  There is nothing special about the query  you seem to have the
>> fragments right, only this is you are query for a large time window,
>> AMS would not return data from METRIC_RECORD table for a such a large time
>> window it would try to find this in the aggregate
>> table, METRIC_RECORD_MINUTE or HOURLY. Try reducing you time range,
>> also check the aggregate tables, the data should still be present in those
>> tables.
>>
>>
>>  Precision params:
>>
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>
>>
>>  -Sid
>>
>>
>>  ------------------------------
>> *From:* Bryan Bende <bb...@gmail.com>
>> *Sent:* Monday, July 27, 2015 6:21 PM
>>
>> *To:* user@ambari.apache.org
>> *Subject:* Re: Posting Metrics to Ambari
>>
>>  Hi Jaimin,
>>
>>  For Step1, when you say exposing metrics through the Ambari REST API...
>> are you talking about the metrics collector REST API, or through the Ambari
>> Server REST API?
>>
>>  I am able to see data through Phoenix, as an example:
>>
>> +--------------------------------+-----------+---------------+--------+
>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID |
>> +--------------------------------+-----------+---------------+--------+
>> | FlowFiles_Received_Last_5_mins | localhost | 1438045869329 | NIFI   |
>>
>>
>>  Then I try to use this API call:
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000
>>
>> and I get: {"metrics":[]}
>>
>> Something must not be lining up with what I am sending over. Is
>> SERVER_TIME the field that has to fall between startTime and endTime?
>>
>> -Bryan
>>
>> On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com>
>> wrote:
>>
>>>  Hi Bryan,
>>>
>>>
>>>  There are 2 steps in this that needs to be achieved.
>>>
>>>
>>>  STEP-1:  Exposing service metrics successfully through Ambari REST API
>>>
>>> STEP-2:  Ambari UI displaying widgets comprised from newly exposed
>>> metrics via Ambari server.
>>>
>>>
>>>
>>>  As step-1 is pre-requisite to step-2, can you confirm that you were
>>> able to achieve step-1 (exposing service metrics successfully through
>>> Ambari REST API) ?
>>>
>>>
>>>  *NOTE:*
>>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>>> are the metrics specific to Ambari metrics service. If the new metrics
>>> that you want to expose are related to any other service then please
>>> edit/create metrics.json file in that specific service package and not in Ambari
>>> metrics service package. widgets.json also needs to be changed/added in the
>>> same service package and not at
>>> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless
>>> you want to add system heatmaps for a stack that inherits HDP-2.0.6 stack).
>>>
>>>
>>>
>>>  -- Thanks
>>>
>>>     Jaimin
>>>  ------------------------------
>>> *From:* Bryan Bende <bb...@gmail.com>
>>> *Sent:* Sunday, July 26, 2015 2:10 PM
>>>
>>> *To:* user@ambari.apache.org
>>> *Subject:* Re: Posting Metrics to Ambari
>>>
>>>
>>> Hi Sid,
>>>
>>> Thanks for the pointers about how to add a metric to the UI. Based on
>>> those instructions I modified
>>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>>> and added the following based on the test metrics I posted:
>>>
>>> "metrics/SmokeTest/FakeMetric": {
>>>
>>>               "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>
>>>               "pointInTime": true,
>>>
>>>               "temporal": true
>>>
>>>             }
>>>
>>> From digging around the filesystem there appears to be a widgets.json in
>>> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks
>>> like this file only contained the definitions of the heatmaps, so I wasn't
>>> sure if this was the right place, but just to see what happened I modified
>>> it as follows:
>>>
>>> 1) Added a whole new layout:
>>>
>>> http://pastebin.com/KqeT8xfe
>>>
>>> 2) Added a heatmap for the test metric:
>>>
>>> http://pastebin.com/AQDT7u6v
>>>
>>> Then I restarted the HDP VM but I don't see anything in the UI under
>>> Metric Actions -> Add, or under Heatmaps. Anything that seems completely
>>> wrong about what I did? Maybe I should be going down the route of defining
>>> a new service type for system I will be sending metrics from?
>>>
>>> Sorry to keep bothering with all these questions, I just don't have any
>>> previous experience with Ambari.
>>>
>>> Thanks,
>>>
>>> Bryan
>>>
>>> On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <
>>> swagle@hortonworks.com> wrote:
>>>
>>>>  The AMS API does not allow open ended queries so startTime and
>>>> endTime are required fields, the curl call should return the error code
>>>> with the apt response.
>>>>
>>>>
>>>>  If this doesn't happen please go ahead and file a Jira.
>>>>
>>>>
>>>>  Using AMS through Ambari UI after getting the plumbing work with
>>>> metrics.json completed would be much easier. The AMS API does need some
>>>> refinement. Jiras / Bugs are welcome.
>>>>
>>>>
>>>>  -Sid
>>>>
>>>>
>>>>
>>>>  ------------------------------
>>>> *From:* Siddharth Wagle <sw...@hortonworks.com>
>>>> *Sent:* Saturday, July 25, 2015 9:01 PM
>>>>
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>
>>>>
>>>> No dev work need only need to modify metrics.json file and then add
>>>> widget from UI.
>>>>
>>>>
>>>>  Stack details:
>>>>
>>>> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>>>>
>>>>
>>>>  UI specifics:
>>>>
>>>>
>>>> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>>>>
>>>>
>>>>  -Sid
>>>>
>>>>
>>>>  ------------------------------
>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>> *Sent:* Saturday, July 25, 2015 7:10 PM
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>
>>>>  Quick update, I was able to connect with the phoenix 4.2.2 client and
>>>> I did get results querying with:
>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
>>>> 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>>>>
>>>>  Now that I know the metrics are posting, I am less concerned about
>>>> querying through the REST API.
>>>>
>>>>  Is there any way to get a custom metric added to the main page of
>>>> Ambari? or does this require development work?
>>>>
>>>>  Thanks,
>>>>
>>>>  Bryan
>>>>
>>>> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:
>>>>
>>>>> Hi Sid,
>>>>>
>>>>>  Thanks for the suggestions. I turned on DEBUG for the metrics
>>>>> collector (had to do this through the Ambari UI configs section) and now I
>>>>> can see some activity... When I post a metric I see:
>>>>>
>>>>>  01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {
>>>>>
>>>>>   "metrics" : [ {
>>>>>
>>>>>     "timestamp" : 1432075898000,
>>>>>
>>>>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>>
>>>>>     "appid" : "amssmoketestfake",
>>>>>
>>>>>     "hostname" : "localhost",
>>>>>
>>>>>     "starttime" : 1432075898000,
>>>>>
>>>>>     "metrics" : {
>>>>>
>>>>>       "1432075898000" : 0.963781711428,
>>>>>
>>>>>       "1432075899000" : 1.432075898E12
>>>>>
>>>>>     }
>>>>>
>>>>>   } ]
>>>>>
>>>>> }
>>>>>
>>>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store
>>>>> connection url: jdbc:phoenix:localhost:61181:/hbase
>>>>>
>>>>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for
>>>>> METRIC_RECORD with 8 key values of total size 925 bytes
>>>>>
>>>>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of
>>>>> 2 mutations into METRIC_RECORD: 3 ms
>>>>>
>>>>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200
>>>>>
>>>>>
>>>>>  So it looks like it posted successfully. Then I hit:
>>>>>
>>>>>
>>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>>>>
>>>>> and I see...
>>>>>
>>>>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>> ParallelIterators:412 - Guideposts: ]
>>>>>
>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>> ParallelIterators:481 - The parallelScans:
>>>>> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>>>>
>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
>>>>> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>>>>
>>>>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
>>>>> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
>>>>> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>>>>
>>>>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
>>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>>> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>>>>
>>>>> I'll see if I can get the phoenix client working and see what that
>>>>> returns.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Bryan
>>>>>
>>>>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <
>>>>> swagle@hortonworks.com> wrote:
>>>>>
>>>>>>  Hi Bryan,
>>>>>>
>>>>>>
>>>>>>  Few things you can do:
>>>>>>
>>>>>>
>>>>>>  1. Turn on DEBUG mode by changing log4j.properties at,
>>>>>> /etc/ambari-metrics-collector/conf/
>>>>>>
>>>>>> This might reveal more info, I don't think we print every metrics
>>>>>> received to the log in 2.0 or 2.1, I did add this option if TRACE is
>>>>>> enabled to trunk recently.
>>>>>>
>>>>>>
>>>>>>  2. Connect using Phoenix directly and you can do a SELECT query
>>>>>> like this:
>>>>>>
>>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>>>>>> order by SERVER_TIME desc limit 10;
>>>>>>
>>>>>>
>>>>>>  Instructions for connecting to Phoenix:
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>>>>
>>>>>>
>>>>>>  3. What API call are you making to get metrics?
>>>>>>
>>>>>> E.g.: http://
>>>>>> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>>>>
>>>>>>
>>>>>>  -Sid
>>>>>>
>>>>>>
>>>>>>  ------------------------------
>>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>>>>> *To:* user@ambari.apache.org
>>>>>> *Subject:* Posting Metrics to Ambari
>>>>>>
>>>>>>   I'm interested in sending metrics to Ambari and I've been looking
>>>>>> at the Metrics Collector REST API described here:
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>>>
>>>>>>  I figured the easiest way to test it would be to get the latest HDP
>>>>>> Sandbox... so I downloaded and started it up. The Metrics Collector service
>>>>>> wasn't running so I started it, and also added port 6188 to the VM port
>>>>>> forwarding. From there I used the example POST on the Wiki page and made a
>>>>>> successful POST which got a 200 response. After that I tried the query, but
>>>>>> could never get any results to come back.
>>>>>>
>>>>>>  I know this list is not specific to HDP, but I was wondering if
>>>>>> anyone has any suggestions as to what I can look at to figure out what is
>>>>>> happening with the data I am posting.
>>>>>>
>>>>>>  I was watching the metrics collector log while posting and querying
>>>>>> and didn't see any activity besides the periodic aggregation.
>>>>>>
>>>>>>  Any suggestions would be greatly appreciated.
>>>>>>
>>>>>>  Thanks,
>>>>>>
>>>>>>  Bran
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
The data is present in the aggregate tables...

0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD* WHERE
METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc
limit 10;

+--------------------------------+-----------+---------------+--------+--------------------------------------+
| METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID | INSTANCE_ID                          |
+--------------------------------+-----------+---------------+--------+--------------------------------------+
| FlowFiles_Received_Last_5_mins | localhost | 1438047369541 | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 |

0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD_MINUTE*
WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME
desc limit 10;

+--------------------------------+-----------+--------+--------------------------------------+---------------+
| METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
+--------------------------------+-----------+--------+--------------------------------------+---------------+
| FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438047369541 |


0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from *METRIC_RECORD_HOURLY*
WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME
desc limit 10;

+--------------------------------+-----------+--------+--------------------------------------+---------------+
| METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
+--------------------------------+-----------+--------+--------------------------------------+---------------+
| FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438045569276 |


Trying a smaller time range (2 mins surrounding the timestamp from the
first record above)....

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=seconds

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=minutes

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=hours


Those all get no results. The only time I got a different response was with
this example:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=143804556927&endTime=1438047420000

which returned:

{"exception":"BadRequestException","message":"java.lang.Exception: The
time range query for precision table exceeds row count limit, please
query aggregate table
instead.","javaClassName":"org.apache.hadoop.yarn.webapp.BadRequestException"}


On Mon, Jul 27, 2015 at 10:50 PM, Siddharth Wagle <sw...@hortonworks.com>
wrote:

>  For Step1, when you say exposing metrics through the Ambari REST API...
> are you talking about the metrics collector REST API, or through the Ambari
> Server REST API?
>
> Answer: Ambari REST API: Note that this is intended use because this is
> what ties the metrics to you your cluster resources, example: You can query
> for say give me metrics for the active Namenode only using Ambari's API.
>
>
>
>  Is SERVER_TIME the field that has to fall between startTime and endTime?
>
> Yes. That is correct
>
>
>  There is nothing special about the query  you seem to have the fragments
> right, only this is you are query for a large time window, AMS would not
> return data from METRIC_RECORD table for a such a large time window it
> would try to find this in the aggregate table, METRIC_RECORD_MINUTE
> or HOURLY. Try reducing you time range, also check the aggregate
> tables, the data should still be present in those tables.
>
>
>  Precision params:
>
>
> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>
>
>  -Sid
>
>
>  ------------------------------
> *From:* Bryan Bende <bb...@gmail.com>
> *Sent:* Monday, July 27, 2015 6:21 PM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: Posting Metrics to Ambari
>
>  Hi Jaimin,
>
>  For Step1, when you say exposing metrics through the Ambari REST API...
> are you talking about the metrics collector REST API, or through the Ambari
> Server REST API?
>
>  I am able to see data through Phoenix, as an example:
>
> +--------------------------------+-----------+---------------+--------+
> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID |
> +--------------------------------+-----------+---------------+--------+
> | FlowFiles_Received_Last_5_mins | localhost | 1438045869329 | NIFI   |
>
>
>  Then I try to use this API call:
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000
>
> and I get: {"metrics":[]}
>
> Something must not be lining up with what I am sending over. Is
> SERVER_TIME the field that has to fall between startTime and endTime?
>
> -Bryan
>
> On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com>
> wrote:
>
>>  Hi Bryan,
>>
>>
>>  There are 2 steps in this that needs to be achieved.
>>
>>
>>  STEP-1:  Exposing service metrics successfully through Ambari REST API
>>
>> STEP-2:  Ambari UI displaying widgets comprised from newly exposed
>> metrics via Ambari server.
>>
>>
>>
>>  As step-1 is pre-requisite to step-2, can you confirm that you were
>> able to achieve step-1 (exposing service metrics successfully through
>> Ambari REST API) ?
>>
>>
>>  *NOTE:*
>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>> are the metrics specific to Ambari metrics service. If the new metrics
>> that you want to expose are related to any other service then please
>> edit/create metrics.json file in that specific service package and not in Ambari
>> metrics service package. widgets.json also needs to be changed/added in the
>> same service package and not at
>> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless
>> you want to add system heatmaps for a stack that inherits HDP-2.0.6 stack).
>>
>>
>>
>>  -- Thanks
>>
>>     Jaimin
>>  ------------------------------
>> *From:* Bryan Bende <bb...@gmail.com>
>> *Sent:* Sunday, July 26, 2015 2:10 PM
>>
>> *To:* user@ambari.apache.org
>> *Subject:* Re: Posting Metrics to Ambari
>>
>>
>> Hi Sid,
>>
>> Thanks for the pointers about how to add a metric to the UI. Based on
>> those instructions I modified
>> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
>> and added the following based on the test metrics I posted:
>>
>> "metrics/SmokeTest/FakeMetric": {
>>
>>               "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",
>>
>>               "pointInTime": true,
>>
>>               "temporal": true
>>
>>             }
>>
>> From digging around the filesystem there appears to be a widgets.json in
>> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks
>> like this file only contained the definitions of the heatmaps, so I wasn't
>> sure if this was the right place, but just to see what happened I modified
>> it as follows:
>>
>> 1) Added a whole new layout:
>>
>> http://pastebin.com/KqeT8xfe
>>
>> 2) Added a heatmap for the test metric:
>>
>> http://pastebin.com/AQDT7u6v
>>
>> Then I restarted the HDP VM but I don't see anything in the UI under
>> Metric Actions -> Add, or under Heatmaps. Anything that seems completely
>> wrong about what I did? Maybe I should be going down the route of defining
>> a new service type for system I will be sending metrics from?
>>
>> Sorry to keep bothering with all these questions, I just don't have any
>> previous experience with Ambari.
>>
>> Thanks,
>>
>> Bryan
>>
>> On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <swagle@hortonworks.com
>> > wrote:
>>
>>>  The AMS API does not allow open ended queries so startTime and endTime
>>> are required fields, the curl call should return the error code with the
>>> apt response.
>>>
>>>
>>>  If this doesn't happen please go ahead and file a Jira.
>>>
>>>
>>>  Using AMS through Ambari UI after getting the plumbing work with
>>> metrics.json completed would be much easier. The AMS API does need some
>>> refinement. Jiras / Bugs are welcome.
>>>
>>>
>>>  -Sid
>>>
>>>
>>>
>>>  ------------------------------
>>> *From:* Siddharth Wagle <sw...@hortonworks.com>
>>> *Sent:* Saturday, July 25, 2015 9:01 PM
>>>
>>> *To:* user@ambari.apache.org
>>> *Subject:* Re: Posting Metrics to Ambari
>>>
>>>
>>> No dev work need only need to modify metrics.json file and then add
>>> widget from UI.
>>>
>>>
>>>  Stack details:
>>>
>>> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>>>
>>>
>>>  UI specifics:
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>>>
>>>
>>>  -Sid
>>>
>>>
>>>  ------------------------------
>>> *From:* Bryan Bende <bb...@gmail.com>
>>> *Sent:* Saturday, July 25, 2015 7:10 PM
>>> *To:* user@ambari.apache.org
>>> *Subject:* Re: Posting Metrics to Ambari
>>>
>>>  Quick update, I was able to connect with the phoenix 4.2.2 client and
>>> I did get results querying with:
>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
>>> 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>>>
>>>  Now that I know the metrics are posting, I am less concerned about
>>> querying through the REST API.
>>>
>>>  Is there any way to get a custom metric added to the main page of
>>> Ambari? or does this require development work?
>>>
>>>  Thanks,
>>>
>>>  Bryan
>>>
>>> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:
>>>
>>>> Hi Sid,
>>>>
>>>>  Thanks for the suggestions. I turned on DEBUG for the metrics
>>>> collector (had to do this through the Ambari UI configs section) and now I
>>>> can see some activity... When I post a metric I see:
>>>>
>>>>  01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {
>>>>
>>>>   "metrics" : [ {
>>>>
>>>>     "timestamp" : 1432075898000,
>>>>
>>>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>
>>>>     "appid" : "amssmoketestfake",
>>>>
>>>>     "hostname" : "localhost",
>>>>
>>>>     "starttime" : 1432075898000,
>>>>
>>>>     "metrics" : {
>>>>
>>>>       "1432075898000" : 0.963781711428,
>>>>
>>>>       "1432075899000" : 1.432075898E12
>>>>
>>>>     }
>>>>
>>>>   } ]
>>>>
>>>> }
>>>>
>>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store
>>>> connection url: jdbc:phoenix:localhost:61181:/hbase
>>>>
>>>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for
>>>> METRIC_RECORD with 8 key values of total size 925 bytes
>>>>
>>>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of
>>>> 2 mutations into METRIC_RECORD: 3 ms
>>>>
>>>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200
>>>>
>>>>
>>>>  So it looks like it posted successfully. Then I hit:
>>>>
>>>>
>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>>>
>>>> and I see...
>>>>
>>>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>> ParallelIterators:412 - Guideposts: ]
>>>>
>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>> ParallelIterators:481 - The parallelScans:
>>>> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>>>
>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
>>>> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>>>
>>>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
>>>> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
>>>> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>>>
>>>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
>>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>>> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>>>
>>>> I'll see if I can get the phoenix client working and see what that
>>>> returns.
>>>>
>>>> Thanks,
>>>>
>>>> Bryan
>>>>
>>>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <
>>>> swagle@hortonworks.com> wrote:
>>>>
>>>>>  Hi Bryan,
>>>>>
>>>>>
>>>>>  Few things you can do:
>>>>>
>>>>>
>>>>>  1. Turn on DEBUG mode by changing log4j.properties at,
>>>>> /etc/ambari-metrics-collector/conf/
>>>>>
>>>>> This might reveal more info, I don't think we print every metrics
>>>>> received to the log in 2.0 or 2.1, I did add this option if TRACE is
>>>>> enabled to trunk recently.
>>>>>
>>>>>
>>>>>  2. Connect using Phoenix directly and you can do a SELECT query like
>>>>> this:
>>>>>
>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>>>>> order by SERVER_TIME desc limit 10;
>>>>>
>>>>>
>>>>>  Instructions for connecting to Phoenix:
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>>>
>>>>>
>>>>>  3. What API call are you making to get metrics?
>>>>>
>>>>> E.g.: http://
>>>>> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>>>
>>>>>
>>>>>  -Sid
>>>>>
>>>>>
>>>>>  ------------------------------
>>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>>>> *To:* user@ambari.apache.org
>>>>> *Subject:* Posting Metrics to Ambari
>>>>>
>>>>>   I'm interested in sending metrics to Ambari and I've been looking
>>>>> at the Metrics Collector REST API described here:
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>>
>>>>>  I figured the easiest way to test it would be to get the latest HDP
>>>>> Sandbox... so I downloaded and started it up. The Metrics Collector service
>>>>> wasn't running so I started it, and also added port 6188 to the VM port
>>>>> forwarding. From there I used the example POST on the Wiki page and made a
>>>>> successful POST which got a 200 response. After that I tried the query, but
>>>>> could never get any results to come back.
>>>>>
>>>>>  I know this list is not specific to HDP, but I was wondering if
>>>>> anyone has any suggestions as to what I can look at to figure out what is
>>>>> happening with the data I am posting.
>>>>>
>>>>>  I was watching the metrics collector log while posting and querying
>>>>> and didn't see any activity besides the periodic aggregation.
>>>>>
>>>>>  Any suggestions would be greatly appreciated.
>>>>>
>>>>>  Thanks,
>>>>>
>>>>>  Bran
>>>>>
>>>>
>>>>
>>>
>>
>

Re: Posting Metrics to Ambari

Posted by Siddharth Wagle <sw...@hortonworks.com>.
For Step1, when you say exposing metrics through the Ambari REST API... are you talking about the metrics collector REST API, or through the Ambari Server REST API?

Answer: the Ambari REST API. Note that this is the intended use, because it is what ties the metrics to your cluster resources. For example, you can ask Ambari's API for metrics for the active NameNode only.



Is SERVER_TIME the field that has to fall between startTime and endTime?

Yes. That is correct


There is nothing special about the query; you seem to have the fragments right. The only thing is that you are querying a large time window. AMS will not return data from the METRIC_RECORD table for such a large time window; it will try to find it in the aggregate tables, METRIC_RECORD_MINUTE or METRIC_RECORD_HOURLY. Try reducing your time range, and also check the aggregate tables; the data should still be present in those tables.


Precision params:

https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Monday, July 27, 2015 6:21 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari

Hi Jaimin,

For Step1, when you say exposing metrics through the Ambari REST API... are you talking about the metrics collector REST API, or through the Ambari Server REST API?

I am able to see data through Phoenix, as an example:
+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+

|               METRIC_NAME                |                 HOSTNAME                 |               SERVER_TIME                |                  APP_ID                  |

+------------------------------------------+------------------------------------------+------------------------------------------+------------------------------------------+

| FlowFiles_Received_Last_5_mins           | localhost                          | 1438045869329                            | NIFI                                     |


Then I try to use this API call:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000

and I get: {"metrics":[]}

Something must not be lining up with what I am sending over. Is SERVER_TIME the field that has to fall between startTime and endTime?

-Bryan

On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com> wrote:

Hi Bryan,


There are 2 steps in this that need to be achieved.


STEP-1:  Exposing service metrics successfully through Ambari REST API

STEP-2:  Ambari UI displaying widgets composed of the newly exposed metrics via the Ambari server.



As step-1 is a pre-requisite to step-2, can you confirm that you were able to achieve step-1 (exposing service metrics successfully through the Ambari REST API)?


NOTE: /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json contains the metrics specific to the Ambari Metrics service. If the new metrics that you want to expose are related to any other service, then please edit/create the metrics.json file in that specific service package and not in the Ambari Metrics service package. widgets.json also needs to be changed/added in the same service package and not at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless you want to add system heatmaps for a stack that inherits the HDP-2.0.6 stack).
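
For illustration, a custom service package on the sandbox's stack might look roughly like the layout below; the exact stack version directory is an assumption and should match whichever stack the sandbox cluster actually runs.

/var/lib/ambari-server/resources/stacks/HDP/<stack-version>/services/NIFI/
    metainfo.xml        # defines the NIFI service and the NIFI_MASTER component
    metrics.json        # maps metrics/nifi/... paths to collector metric names
    widgets.json        # default widget layout (e.g. a NIFI_SUMMARY section)
    package/scripts/    # lifecycle scripts for the component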



-- Thanks

    Jaimin

________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Sunday, July 26, 2015 2:10 PM

To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari


Hi Sid,

Thanks for the pointers about how to add a metric to the UI. Based on those instructions I modified /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json and added the following based on the test metrics I posted:

"metrics/SmokeTest/FakeMetric": {

              "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",

              "pointInTime": true,

              "temporal": true

            }

From digging around the filesystem there appears to be a widgets.json in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks like this file only contained the definitions of the heatmaps, so I wasn't sure if this was the right place, but just to see what happened I modified it as follows:

1) Added a whole new layout:

http://pastebin.com/KqeT8xfe

2) Added a heatmap for the test metric:

http://pastebin.com/AQDT7u6v

Then I restarted the HDP VM but I don't see anything in the UI under Metric Actions -> Add, or under Heatmaps. Anything that seems completely wrong about what I did? Maybe I should be going down the route of defining a new service type for system I will be sending metrics from?

Sorry to keep bothering with all these questions, I just don't have any previous experience with Ambari.

Thanks,

Bryan

On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <sw...@hortonworks.com> wrote:

The AMS API does not allow open ended queries so startTime and endTime are required fields, the curl call should return the error code with the apt response.


If this doesn't happen please go ahead and file a Jira.


Using AMS through Ambari UI after getting the plumbing work with metrics.json completed would be much easier. The AMS API does need some refinement. Jiras / Bugs are welcome.


-Sid



________________________________
From: Siddharth Wagle <sw...@hortonworks.com>
Sent: Saturday, July 25, 2015 9:01 PM

To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari


No dev work needed; you only need to modify the metrics.json file and then add the widget from the UI.


Stack details:

https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics


UI specifics:

https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Saturday, July 25, 2015 7:10 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari

Quick update, I was able to connect with the phoenix 4.2.2 client and I did get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;

Now that I know the metrics are posting, I am less concerned about querying through the REST API.

Is there any way to get a custom metric added to the main page of Ambari? or does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:
Hi Sid,

Thanks for the suggestions. I turned on DEBUG for the metrics collector (had to do this through the Ambari UI configs section) and now I can see some activity... When I post a metric I see:


01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {

  "metrics" : [ {

    "timestamp" : 1432075898000,

    "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",

    "appid" : "amssmoketestfake",

    "hostname" : "localhost",

    "starttime" : 1432075898000,

    "metrics" : {

      "1432075898000" : 0.963781711428,

      "1432075899000" : 1.432075898E12

    }

  } ]

}

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store connection url: jdbc:phoenix:localhost:61181:/hbase

01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values of total size 925 bytes

01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of  2 mutations into METRIC_RECORD: 3 ms

01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200


So it looks like it posted successfully. Then I hit:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric

and I see...

01:31:16,952 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:412 - Guideposts: ]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:481 - The parallelScans: [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1, count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]

01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan: {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}

01:31:16,959 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] PhoenixHBaseAccessor:552 - Aggregate records size: 0

I'll see if I can get the phoenix client working and see what that returns.

Thanks,

Bryan

On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com> wrote:

Hi Bryan,


Few things you can do:


1. Turn on DEBUG mode by changing log4j.properties at, /etc/ambari-metrics-collector/conf/

This might reveal more info, I don't think we print every metrics received to the log in 2.0 or 2.1, I did add this option if TRACE is enabled to trunk recently.


2. Connect using Phoenix directly and you can do a SELECT query like this:

SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;


Instructions for connecting to Phoenix:

https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema


3. What API call are you making to get metrics?

E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
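
(Filled in for the smoke-test metric from the log above, with appId added and the time range chosen to bracket the two posted datapoints, that would be:)

  curl -i "http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric&appId=amssmoketestfake&hostname=localhost&startTime=1432075898000&endTime=1432075900000"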


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Friday, July 24, 2015 2:03 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP Sandbox... so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.

I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.

I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan





Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
Hi Jaimin,

For Step1, when you say exposing metrics through the Ambari REST API... are
you talking about the metrics collector REST API, or through the Ambari
Server REST API?

I am able to see data through Phoenix, as an example:
+--------------------------------+-----------+---------------+--------+
| METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID |
+--------------------------------+-----------+---------------+--------+
| FlowFiles_Received_Last_5_mins | localhost | 1438045869329 | NIFI   |
+--------------------------------+-----------+---------------+--------+

Then I try to use this API call:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000

and I get: {"metrics":[]}

Something must not be lining up with what I am sending over. Is SERVER_TIME
the field that has to fall between startTime and endTime?
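
(One way to sanity-check that is to pull the stored SERVER_TIME range for the metric straight from Phoenix and confirm the query window brackets it, for example:)

  SELECT MIN(SERVER_TIME), MAX(SERVER_TIME)
  FROM METRIC_RECORD
  WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins';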

-Bryan

On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly <ja...@hortonworks.com>
wrote:

>  Hi Bryan,
>
>
>  There are 2 steps in this that needs to be achieved.
>
>
>  STEP-1:  Exposing service metrics successfully through Ambari REST API
>
> STEP-2:  Ambari UI displaying widgets comprised from newly exposed metrics
> via Ambari server.
>
>
>
>  As step-1 is pre-requisite to step-2, can you confirm that you were able
> to achieve step-1 (exposing service metrics successfully through Ambari
> REST API) ?
>
>
>  *NOTE:*
> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
> are the metrics specific to Ambari metrics service. If the new metrics
> that you want to expose are related to any other service then please
> edit/create metrics.json file in that specific service package and not in Ambari
> metrics service package. widgets.json also needs to be changed/added in the
> same service package and not at
> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless
> you want to add system heatmaps for a stack that inherits HDP-2.0.6 stack).
>
>
>
>  -- Thanks
>
>     Jaimin
>  ------------------------------
> *From:* Bryan Bende <bb...@gmail.com>
> *Sent:* Sunday, July 26, 2015 2:10 PM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: Posting Metrics to Ambari
>
>
> Hi Sid,
>
> Thanks for the pointers about how to add a metric to the UI. Based on
> those instructions I modified
> /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
> and added the following based on the test metrics I posted:
>
> "metrics/SmokeTest/FakeMetric": {
>
>               "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",
>
>               "pointInTime": true,
>
>               "temporal": true
>
>             }
>
> From digging around the filesystem there appears to be a widgets.json in
> /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks
> like this file only contained the definitions of the heatmaps, so I wasn't
> sure if this was the right place, but just to see what happened I modified
> it as follows:
>
> 1) Added a whole new layout:
>
> http://pastebin.com/KqeT8xfe
>
> 2) Added a heatmap for the test metric:
>
> http://pastebin.com/AQDT7u6v
>
> Then I restarted the HDP VM but I don't see anything in the UI under
> Metric Actions -> Add, or under Heatmaps. Anything that seems completely
> wrong about what I did? Maybe I should be going down the route of defining
> a new service type for system I will be sending metrics from?
>
> Sorry to keep bothering with all these questions, I just don't have any
> previous experience with Ambari.
>
> Thanks,
>
> Bryan
>
> On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <sw...@hortonworks.com>
> wrote:
>
>>  The AMS API does not allow open ended queries so startTime and endTime
>> are required fields, the curl call should return the error code with the
>> apt response.
>>
>>
>>  If this doesn't happen please go ahead and file a Jira.
>>
>>
>>  Using AMS through Ambari UI after getting the plumbing work with
>> metrics.json completed would be much easier. The AMS API does need some
>> refinement. Jiras / Bugs are welcome.
>>
>>
>>  -Sid
>>
>>
>>
>>  ------------------------------
>> *From:* Siddharth Wagle <sw...@hortonworks.com>
>> *Sent:* Saturday, July 25, 2015 9:01 PM
>>
>> *To:* user@ambari.apache.org
>> *Subject:* Re: Posting Metrics to Ambari
>>
>>
>> No dev work need only need to modify metrics.json file and then add
>> widget from UI.
>>
>>
>>  Stack details:
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>>
>>
>>  UI specifics:
>>
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>>
>>
>>  -Sid
>>
>>
>>  ------------------------------
>> *From:* Bryan Bende <bb...@gmail.com>
>> *Sent:* Saturday, July 25, 2015 7:10 PM
>> *To:* user@ambari.apache.org
>> *Subject:* Re: Posting Metrics to Ambari
>>
>>  Quick update, I was able to connect with the phoenix 4.2.2 client and I
>> did get results querying with:
>> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
>> 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>>
>>  Now that I know the metrics are posting, I am less concerned about
>> querying through the REST API.
>>
>>  Is there any way to get a custom metric added to the main page of
>> Ambari? or does this require development work?
>>
>>  Thanks,
>>
>>  Bryan
>>
>> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:
>>
>>> Hi Sid,
>>>
>>>  Thanks for the suggestions. I turned on DEBUG for the metrics
>>> collector (had to do this through the Ambari UI configs section) and now I
>>> can see some activity... When I post a metric I see:
>>>
>>>  01:30:18,372 DEBUG [95266635@qtp-171166092-2 -
>>> /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {
>>>
>>>   "metrics" : [ {
>>>
>>>     "timestamp" : 1432075898000,
>>>
>>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>
>>>     "appid" : "amssmoketestfake",
>>>
>>>     "hostname" : "localhost",
>>>
>>>     "starttime" : 1432075898000,
>>>
>>>     "metrics" : {
>>>
>>>       "1432075898000" : 0.963781711428,
>>>
>>>       "1432075899000" : 1.432075898E12
>>>
>>>     }
>>>
>>>   } ]
>>>
>>> }
>>>
>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>>> DefaultPhoenixDataSource:67 - Metric store connection url:
>>> jdbc:phoenix:localhost:61181:/hbase
>>>
>>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>>> MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values
>>> of total size 925 bytes
>>>
>>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>>> MutationState:436 - Total time for batch call of  2 mutations into
>>> METRIC_RECORD: 3 ms
>>>
>>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>>> log:40 - RESPONSE /ws/v1/timeline/metrics  200
>>>
>>>
>>>  So it looks like it posted successfully. Then I hit:
>>>
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>>
>>> and I see...
>>>
>>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>> ParallelIterators:412 - Guideposts: ]
>>>
>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>> ParallelIterators:481 - The parallelScans:
>>> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>>
>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
>>> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>>
>>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
>>> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
>>> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>>
>>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
>>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>>> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>>
>>> I'll see if I can get the phoenix client working and see what that
>>> returns.
>>>
>>> Thanks,
>>>
>>> Bryan
>>>
>>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <swagle@hortonworks.com
>>> > wrote:
>>>
>>>>  Hi Bryan,
>>>>
>>>>
>>>>  Few things you can do:
>>>>
>>>>
>>>>  1. Turn on DEBUG mode by changing log4j.properties at,
>>>> /etc/ambari-metrics-collector/conf/
>>>>
>>>> This might reveal more info, I don't think we print every metrics
>>>> received to the log in 2.0 or 2.1, I did add this option if TRACE is
>>>> enabled to trunk recently.
>>>>
>>>>
>>>>  2. Connect using Phoenix directly and you can do a SELECT query like
>>>> this:
>>>>
>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>>>> order by SERVER_TIME desc limit 10;
>>>>
>>>>
>>>>  Instructions for connecting to Phoenix:
>>>>
>>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>>
>>>>
>>>>  3. What API call are you making to get metrics?
>>>>
>>>> E.g.: http://
>>>> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>>
>>>>
>>>>  -Sid
>>>>
>>>>
>>>>  ------------------------------
>>>> *From:* Bryan Bende <bb...@gmail.com>
>>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Posting Metrics to Ambari
>>>>
>>>>   I'm interested in sending metrics to Ambari and I've been looking at
>>>> the Metrics Collector REST API described here:
>>>>
>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>
>>>>  I figured the easiest way to test it would be to get the latest HDP
>>>> Sandbox... so I downloaded and started it up. The Metrics Collector service
>>>> wasn't running so I started it, and also added port 6188 to the VM port
>>>> forwarding. From there I used the example POST on the Wiki page and made a
>>>> successful POST which got a 200 response. After that I tried the query, but
>>>> could never get any results to come back.
>>>>
>>>>  I know this list is not specific to HDP, but I was wondering if
>>>> anyone has any suggestions as to what I can look at to figure out what is
>>>> happening with the data I am posting.
>>>>
>>>>  I was watching the metrics collector log while posting and querying
>>>> and didn't see any activity besides the periodic aggregation.
>>>>
>>>>  Any suggestions would be greatly appreciated.
>>>>
>>>>  Thanks,
>>>>
>>>>  Bran
>>>>
>>>
>>>
>>
>

Re: Posting Metrics to Ambari

Posted by Jaimin Jetly <ja...@hortonworks.com>.
Hi Bryan,


There are 2 steps in this that need to be achieved.


STEP-1:  Exposing service metrics successfully through Ambari REST API

STEP-2:  Ambari UI displaying widgets comprised from newly exposed metrics via Ambari server.



As step-1 is a pre-requisite to step-2, can you confirm that you were able to achieve step-1 (exposing service metrics successfully through the Ambari REST API)?


NOTE: /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json contains the metrics specific to the Ambari Metrics service. If the new metrics that you want to expose are related to any other service, then please edit/create the metrics.json file in that specific service package and not in the Ambari Metrics service package. widgets.json also needs to be changed/added in the same service package and not at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless you want to add system heatmaps for a stack that inherits the HDP-2.0.6 stack).



-- Thanks

    Jaimin

________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Sunday, July 26, 2015 2:10 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari


Hi Sid,

Thanks for the pointers about how to add a metric to the UI. Based on those instructions I modified /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json and added the following based on the test metrics I posted:

"metrics/SmokeTest/FakeMetric": {

              "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",

              "pointInTime": true,

              "temporal": true

            }

From digging around the filesystem there appears to be a widgets.json in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks like this file only contained the definitions of the heatmaps, so I wasn't sure if this was the right place, but just to see what happened I modified it as follows:

1) Added a whole new layout:

http://pastebin.com/KqeT8xfe

2) Added a heatmap for the test metric:

http://pastebin.com/AQDT7u6v

Then I restarted the HDP VM but I don't see anything in the UI under Metric Actions -> Add, or under Heatmaps. Anything that seems completely wrong about what I did? Maybe I should be going down the route of defining a new service type for the system I will be sending metrics from?

Sorry to keep bothering with all these questions, I just don't have any previous experience with Ambari.

Thanks,

Bryan

On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

The AMS API does not allow open-ended queries, so startTime and endTime are required fields; the curl call should return an error code with an appropriate response.


If this doesn't happen please go ahead and file a Jira.


Using AMS through the Ambari UI, after getting the plumbing work with metrics.json completed, would be much easier. The AMS API does need some refinement. Jiras / Bugs are welcome.


-Sid



________________________________
From: Siddharth Wagle <sw...@hortonworks.com>>
Sent: Saturday, July 25, 2015 9:01 PM

To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Posting Metrics to Ambari


No dev work needed, you only need to modify the metrics.json file and then add the widget from the UI.


Stack details:

https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics


UI specifics:

https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Saturday, July 25, 2015 7:10 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Posting Metrics to Ambari

Quick update, I was able to connect with the phoenix 4.2.2 client and I did get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;

Now that I know the metrics are posting, I am less concerned about querying through the REST API.

Is there any way to get a custom metric added to the main page of Ambari? or does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com>> wrote:
Hi Sid,

Thanks for the suggestions. I turned on DEBUG for the metrics collector (had to do this through the Ambari UI configs section) and now I can see some activity... When I post a metric I see:


01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {

  "metrics" : [ {

    "timestamp" : 1432075898000,

    "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",

    "appid" : "amssmoketestfake",

    "hostname" : "localhost",

    "starttime" : 1432075898000,

    "metrics" : {

      "1432075898000" : 0.963781711428,

      "1432075899000" : 1.432075898E12

    }

  } ]

}

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store connection url: jdbc:phoenix:localhost:61181:/hbase

01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values of total size 925 bytes

01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of  2 mutations into METRIC_RECORD: 3 ms

01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200


So it looks like it posted successfully. Then I hit:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric

and I see...

01:31:16,952 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:412 - Guideposts: ]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:481 - The parallelScans: [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1, count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]

01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan: {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}

01:31:16,959 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] PhoenixHBaseAccessor:552 - Aggregate records size: 0

I'll see if I can get the phoenix client working and see what that returns.

Thanks,

Bryan

On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

Hi Bryan,


Few things you can do:


1. Turn on DEBUG mode by changing log4j.properties at, /etc/ambari-metrics-collector/conf/

This might reveal more info. I don't think we print every metric received to the log in 2.0 or 2.1; I recently added this to trunk as an option when TRACE is enabled.


2. Connect using Phoenix directly and you can do a SELECT query like this:

SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;


Instructions for connecting to Phoenix:

https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema


3. What API call are you making to get metrics?

E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Friday, July 24, 2015 2:03 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP Sandbox... so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.

I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.

I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan




Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
Hi Sid,

Thanks for the pointers about how to add a metric to the UI. Based on those
instructions I modified
/var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json
and added the following based on the test metrics I posted:

"metrics/SmokeTest/FakeMetric": {

              "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",

              "pointInTime": true,

              "temporal": true

            }

From digging around the filesystem there appears to be a widgets.json in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks like this file only contained the definitions of the heatmaps, so I wasn't sure if this was the right place, but just to see what happened I modified it as follows:
/var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks
like this file only contained the definitions of the heatmaps, so I wasn't
sure if this was the right place, but just to see what happened I modified
it as follows:

1) Added a whole new layout:

http://pastebin.com/KqeT8xfe

2) Added a heatmap for the test metric:

http://pastebin.com/AQDT7u6v
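
(The pastebin contents are not preserved in the thread. As a rough sketch only, a single widget entry in widgets.json takes approximately the shape below, following the Enhanced Service Dashboard page; the field names and values here are assumptions and should be checked against that page and the stock AMBARI_METRICS widgets.json:)

  {
    "widget_name": "SmokeTest FakeMetric",
    "description": "Test metric posted to the collector",
    "widget_type": "GRAPH",
    "is_visible": true,
    "metrics": [{
      "name": "AMBARI_METRICS.SmokeTest.FakeMetric",
      "metric_path": "metrics/SmokeTest/FakeMetric",
      "service_name": "AMBARI_METRICS",
      "component_name": "METRICS_COLLECTOR"
    }],
    "values": [{
      "name": "FakeMetric",
      "value": "${AMBARI_METRICS.SmokeTest.FakeMetric}"
    }],
    "properties": {
      "graph_type": "LINE",
      "time_range": "1"
    }
  }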

Then I restarted the HDP VM but I don't see anything in the UI under Metric
Actions -> Add, or under Heatmaps. Anything that seems completely wrong
about what I did? Maybe I should be going down the route of defining a new
service type for the system I will be sending metrics from?

Sorry to keep bothering with all these questions, I just don't have any
previous experience with Ambari.

Thanks,

Bryan

On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <sw...@hortonworks.com>
wrote:

>  The AMS API does not allow open ended queries so startTime and endTime
> are required fields, the curl call should return the error code with the
> apt response.
>
>
>  If this doesn't happen please go ahead and file a Jira.
>
>
>  Using AMS through Ambari UI after getting the plumbing work with
> metrics.json completed would be much easier. The AMS API does need some
> refinement. Jiras / Bugs are welcome.
>
>
>  -Sid
>
>
>
>  ------------------------------
> *From:* Siddharth Wagle <sw...@hortonworks.com>
> *Sent:* Saturday, July 25, 2015 9:01 PM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: Posting Metrics to Ambari
>
>
> No dev work need only need to modify metrics.json file and then add widget
> from UI.
>
>
>  Stack details:
>
> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>
>
>  UI specifics:
>
>
> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>
>
>  -Sid
>
>
>  ------------------------------
> *From:* Bryan Bende <bb...@gmail.com>
> *Sent:* Saturday, July 25, 2015 7:10 PM
> *To:* user@ambari.apache.org
> *Subject:* Re: Posting Metrics to Ambari
>
>  Quick update, I was able to connect with the phoenix 4.2.2 client and I
> did get results querying with:
> SELECT * from METRIC_RECORD WHERE METRIC_NAME =
> 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>
>  Now that I know the metrics are posting, I am less concerned about
> querying through the REST API.
>
>  Is there any way to get a custom metric added to the main page of
> Ambari? or does this require development work?
>
>  Thanks,
>
>  Bryan
>
> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:
>
>> Hi Sid,
>>
>>  Thanks for the suggestions. I turned on DEBUG for the metrics collector
>> (had to do this through the Ambari UI configs section) and now I can see
>> some activity... When I post a metric I see:
>>
>>  01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>> TimelineWebServices:270 - Storing metrics: {
>>
>>   "metrics" : [ {
>>
>>     "timestamp" : 1432075898000,
>>
>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>
>>     "appid" : "amssmoketestfake",
>>
>>     "hostname" : "localhost",
>>
>>     "starttime" : 1432075898000,
>>
>>     "metrics" : {
>>
>>       "1432075898000" : 0.963781711428,
>>
>>       "1432075899000" : 1.432075898E12
>>
>>     }
>>
>>   } ]
>>
>> }
>>
>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>> DefaultPhoenixDataSource:67 - Metric store connection url:
>> jdbc:phoenix:localhost:61181:/hbase
>>
>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>> MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values
>> of total size 925 bytes
>>
>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>> MutationState:436 - Total time for batch call of  2 mutations into
>> METRIC_RECORD: 3 ms
>>
>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
>> log:40 - RESPONSE /ws/v1/timeline/metrics  200
>>
>>
>>  So it looks like it posted successfully. Then I hit:
>>
>>
>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>
>> and I see...
>>
>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>> ParallelIterators:412 - Guideposts: ]
>>
>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>> ParallelIterators:481 - The parallelScans:
>> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>
>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
>> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>
>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
>> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
>> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>
>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
>> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
>> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>
>> I'll see if I can get the phoenix client working and see what that
>> returns.
>>
>> Thanks,
>>
>> Bryan
>>
>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>
>> wrote:
>>
>>>  Hi Bryan,
>>>
>>>
>>>  Few things you can do:
>>>
>>>
>>>  1. Turn on DEBUG mode by changing log4j.properties at,
>>> /etc/ambari-metrics-collector/conf/
>>>
>>> This might reveal more info, I don't think we print every metrics
>>> received to the log in 2.0 or 2.1, I did add this option if TRACE is
>>> enabled to trunk recently.
>>>
>>>
>>>  2. Connect using Phoenix directly and you can do a SELECT query like
>>> this:
>>>
>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>>> order by SERVER_TIME desc limit 10;
>>>
>>>
>>>  Instructions for connecting to Phoenix:
>>>
>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>
>>>
>>>  3. What API call are you making to get metrics?
>>>
>>> E.g.: http://
>>> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>
>>>
>>>  -Sid
>>>
>>>
>>>  ------------------------------
>>> *From:* Bryan Bende <bb...@gmail.com>
>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>> *To:* user@ambari.apache.org
>>> *Subject:* Posting Metrics to Ambari
>>>
>>>   I'm interested in sending metrics to Ambari and I've been looking at
>>> the Metrics Collector REST API described here:
>>>
>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>
>>>  I figured the easiest way to test it would be to get the latest HDP
>>> Sandbox... so I downloaded and started it up. The Metrics Collector service
>>> wasn't running so I started it, and also added port 6188 to the VM port
>>> forwarding. From there I used the example POST on the Wiki page and made a
>>> successful POST which got a 200 response. After that I tried the query, but
>>> could never get any results to come back.
>>>
>>>  I know this list is not specific to HDP, but I was wondering if anyone
>>> has any suggestions as to what I can look at to figure out what is
>>> happening with the data I am posting.
>>>
>>>  I was watching the metrics collector log while posting and querying
>>> and didn't see any activity besides the periodic aggregation.
>>>
>>>  Any suggestions would be greatly appreciated.
>>>
>>>  Thanks,
>>>
>>>  Bran
>>>
>>
>>
>

Re: Posting Metrics to Ambari

Posted by Siddharth Wagle <sw...@hortonworks.com>.
The AMS API does not allow open-ended queries, so startTime and endTime are required fields; the curl call should return an error code with an appropriate response.


If this doesn't happen please go ahead and file a Jira.


Using AMS through the Ambari UI, after getting the plumbing work with metrics.json completed, would be much easier. The AMS API does need some refinement. Jiras / Bugs are welcome.


-Sid



________________________________
From: Siddharth Wagle <sw...@hortonworks.com>
Sent: Saturday, July 25, 2015 9:01 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari


No dev work needed, you only need to modify the metrics.json file and then add the widget from the UI.


Stack details:

https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics


UI specifics:

https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Saturday, July 25, 2015 7:10 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari

Quick update, I was able to connect with the phoenix 4.2.2 client and I did get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;

Now that I know the metrics are posting, I am less concerned about querying through the REST API.

Is there any way to get a custom metric added to the main page of Ambari? or does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com>> wrote:
Hi Sid,

Thanks for the suggestions. I turned on DEBUG for the metrics collector (had to do this through the Ambari UI configs section) and now I can see some activity... When I post a metric I see:


01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {

  "metrics" : [ {

    "timestamp" : 1432075898000,

    "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",

    "appid" : "amssmoketestfake",

    "hostname" : "localhost",

    "starttime" : 1432075898000,

    "metrics" : {

      "1432075898000" : 0.963781711428,

      "1432075899000" : 1.432075898E12

    }

  } ]

}

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store connection url: jdbc:phoenix:localhost:61181:/hbase

01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values of total size 925 bytes

01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of  2 mutations into METRIC_RECORD: 3 ms

01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200


So it looks like it posted successfully. Then I hit:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric

and I see...

01:31:16,952 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:412 - Guideposts: ]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:481 - The parallelScans: [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1, count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]

01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan: {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}

01:31:16,959 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] PhoenixHBaseAccessor:552 - Aggregate records size: 0

I'll see if I can get the phoenix client working and see what that returns.

Thanks,

Bryan

On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

Hi Bryan,


Few things you can do:


1. Turn on DEBUG mode by changing log4j.properties at, /etc/ambari-metrics-collector/conf/

This might reveal more info. I don't think we print every metric received to the log in 2.0 or 2.1; I recently added this to trunk as an option when TRACE is enabled.


2. Connect using Phoenix directly and you can do a SELECT query like this:

SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;


Instructions for connecting to Phoenix:

https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema


3. What API call are you making to get metrics?

E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Friday, July 24, 2015 2:03 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP Sandbox... so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.

I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.

I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan



Re: Posting Metrics to Ambari

Posted by Siddharth Wagle <sw...@hortonworks.com>.
No dev work needed, you only need to modify the metrics.json file and then add the widget from the UI.


Stack details:

https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics


UI specifics:

https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Saturday, July 25, 2015 7:10 PM
To: user@ambari.apache.org
Subject: Re: Posting Metrics to Ambari

Quick update, I was able to connect with the phoenix 4.2.2 client and I did get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;

Now that I know the metrics are posting, I am less concerned about querying through the REST API.

Is there any way to get a custom metric added to the main page of Ambari? or does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com>> wrote:
Hi Sid,

Thanks for the suggestions. I turned on DEBUG for the metrics collector (had to do this through the Ambari UI configs section) and now I can see some activity... When I post a metric I see:


01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {

  "metrics" : [ {

    "timestamp" : 1432075898000,

    "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",

    "appid" : "amssmoketestfake",

    "hostname" : "localhost",

    "starttime" : 1432075898000,

    "metrics" : {

      "1432075898000" : 0.963781711428,

      "1432075899000" : 1.432075898E12

    }

  } ]

}

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store connection url: jdbc:phoenix:localhost:61181:/hbase

01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values of total size 925 bytes

01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of  2 mutations into METRIC_RECORD: 3 ms

01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics  200


So it looks like it posted successfully. Then I hit:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric

and I see...

01:31:16,952 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:412 - Guideposts: ]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:481 - The parallelScans: [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1, count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]

01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan: {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}

01:31:16,959 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] PhoenixHBaseAccessor:552 - Aggregate records size: 0

I'll see if I can get the phoenix client working and see what that returns.

Thanks,

Bryan

On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>> wrote:

Hi Bryan,


Few things you can do:


1. Turn on DEBUG mode by changing log4j.properties at, /etc/ambari-metrics-collector/conf/

This might reveal more info. I don't think we print every metric received to the log in 2.0 or 2.1; I recently added this to trunk as an option when TRACE is enabled.


2. Connect using Phoenix directly and you can do a SELECT query like this:

SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;


Instructions for connecting to Phoenix:

https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema


3. What API call are you making to get metrics?

E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>>
Sent: Friday, July 24, 2015 2:03 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP Sandbox... so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.

I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.

I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan



Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
Quick update, I was able to connect with the phoenix 4.2.2 client and I did
get results querying with:
SELECT * from METRIC_RECORD WHERE METRIC_NAME =
'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;

Now that I know the metrics are posting, I am less concerned about querying
through the REST API.

Is there any way to get a custom metric added to the main page of Ambari?
or does this require development work?

Thanks,

Bryan

On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende <bb...@gmail.com> wrote:

> Hi Sid,
>
> Thanks for the suggestions. I turned on DEBUG for the metrics collector
> (had to do this through the Ambari UI configs section) and now I can see
> some activity... When I post a metric I see:
>
> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> TimelineWebServices:270 - Storing metrics: {
>
>   "metrics" : [ {
>
>     "timestamp" : 1432075898000,
>
>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>
>     "appid" : "amssmoketestfake",
>
>     "hostname" : "localhost",
>
>     "starttime" : 1432075898000,
>
>     "metrics" : {
>
>       "1432075898000" : 0.963781711428,
>
>       "1432075899000" : 1.432075898E12
>
>     }
>
>   } ]
>
> }
>
> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> DefaultPhoenixDataSource:67 - Metric store connection url:
> jdbc:phoenix:localhost:61181:/hbase
>
> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values
> of total size 925 bytes
>
> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> MutationState:436 - Total time for batch call of  2 mutations into
> METRIC_RECORD: 3 ms
>
> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
> log:40 - RESPONSE /ws/v1/timeline/metrics  200
>
>
> So it looks like it posted successfully. Then I hit:
>
>
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>
> and I see...
>
> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> ParallelIterators:412 - Guideposts: ]
>
> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> ParallelIterators:481 - The parallelScans:
> [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>
> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
> count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>
> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
> d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
> {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>
> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
> /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
> PhoenixHBaseAccessor:552 - Aggregate records size: 0
>
> I'll see if I can get the phoenix client working and see what that returns.
>
> Thanks,
>
> Bryan
>
> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>
> wrote:
>
>>  Hi Bryan,
>>
>>
>>  Few things you can do:
>>
>>
>>  1. Turn on DEBUG mode by changing log4j.properties at,
>> /etc/ambari-metrics-collector/conf/
>>
>> This might reveal more info, I don't think we print every metrics
>> received to the log in 2.0 or 2.1, I did add this option if TRACE is
>> enabled to trunk recently.
>>
>>
>>  2. Connect using Phoenix directly and you can do a SELECT query like
>> this:
>>
>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>'
>> order by SERVER_TIME desc limit 10;
>>
>>
>>  Instructions for connecting to Phoenix:
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>
>>
>>  3. What API call are you making to get metrics?
>>
>> E.g.: http://
>> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>
>>
>>  -Sid
>>
>>
>>  ------------------------------
>> *From:* Bryan Bende <bb...@gmail.com>
>> *Sent:* Friday, July 24, 2015 2:03 PM
>> *To:* user@ambari.apache.org
>> *Subject:* Posting Metrics to Ambari
>>
>>  I'm interested in sending metrics to Ambari and I've been looking at
>> the Metrics Collector REST API described here:
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>
>>  I figured the easiest way to test it would be to get the latest HDP
>> Sandbox... so I downloaded and started it up. The Metrics Collector service
>> wasn't running so I started it, and also added port 6188 to the VM port
>> forwarding. From there I used the example POST on the Wiki page and made a
>> successful POST which got a 200 response. After that I tried the query, but
>> could never get any results to come back.
>>
>>  I know this list is not specific to HDP, but I was wondering if anyone
>> has any suggestions as to what I can look at to figure out what is
>> happening with the data I am posting.
>>
>>  I was watching the metrics collector log while posting and querying and
>> didn't see any activity besides the periodic aggregation.
>>
>>  Any suggestions would be greatly appreciated.
>>
>>  Thanks,
>>
>>  Bran
>>
>
>

Re: Posting Metrics to Ambari

Posted by Bryan Bende <bb...@gmail.com>.
Hi Sid,

Thanks for the suggestions. I turned on DEBUG for the metrics collector
(had to do this through the Ambari UI configs section) and now I can see
some activity... When I post a metric I see:

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
TimelineWebServices:270 - Storing metrics: {

  "metrics" : [ {

    "timestamp" : 1432075898000,

    "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",

    "appid" : "amssmoketestfake",

    "hostname" : "localhost",

    "starttime" : 1432075898000,

    "metrics" : {

      "1432075898000" : 0.963781711428,

      "1432075899000" : 1.432075898E12

    }

  } ]

}

01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
DefaultPhoenixDataSource:67 - Metric store connection url:
jdbc:phoenix:localhost:61181:/hbase

01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values
of total size 925 bytes

01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
MutationState:436 - Total time for batch call of  2 mutations into
METRIC_RECORD: 3 ms

01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics]
log:40 - RESPONSE /ws/v1/timeline/metrics  200


So it looks like it posted successfully. Then I hit:

http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric

and I see...

01:31:16,952 DEBUG [95266635@qtp-171166092-2 -
/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
ParallelIterators:412 - Guideposts: ]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
ParallelIterators:481 - The parallelScans:
[[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]

01:31:16,953 DEBUG [95266635@qtp-171166092-2 -
/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1,
count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]

01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id:
d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan:
{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}

01:31:16,959 DEBUG [95266635@qtp-171166092-2 -
/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric]
PhoenixHBaseAccessor:552 - Aggregate records size: 0

I'll see if I can get the phoenix client working and see what that returns.

Thanks,

Bryan

On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <sw...@hortonworks.com>
wrote:

>  Hi Bryan,
>
>
>  Few things you can do:
>
>
>  1. Turn on DEBUG mode by changing log4j.properties at,
> /etc/ambari-metrics-collector/conf/
>
> This might reveal more info, I don't think we print every metrics received
> to the log in 2.0 or 2.1, I did add this option if TRACE is enabled to
> trunk recently.
>
>
>  2. Connect using Phoenix directly and you can do a SELECT query like
> this:
>
> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order
> by SERVER_TIME desc limit 10;
>
>
>  Instructions for connecting to Phoenix:
>
> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>
>
>  3. What API call are you making to get metrics?
>
> E.g.: http://
> <ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>
>
>  -Sid
>
>
>  ------------------------------
> *From:* Bryan Bende <bb...@gmail.com>
> *Sent:* Friday, July 24, 2015 2:03 PM
> *To:* user@ambari.apache.org
> *Subject:* Posting Metrics to Ambari
>
>  I'm interested in sending metrics to Ambari and I've been looking at the
> Metrics Collector REST API described here:
>
> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>
>  I figured the easiest way to test it would be to get the latest HDP
> Sandbox... so I downloaded and started it up. The Metrics Collector service
> wasn't running so I started it, and also added port 6188 to the VM port
> forwarding. From there I used the example POST on the Wiki page and made a
> successful POST which got a 200 response. After that I tried the query, but
> could never get any results to come back.
>
>  I know this list is not specific to HDP, but I was wondering if anyone
> has any suggestions as to what I can look at to figure out what is
> happening with the data I am posting.
>
>  I was watching the metrics collector log while posting and querying and
> didn't see any activity besides the periodic aggregation.
>
>  Any suggestions would be greatly appreciated.
>
>  Thanks,
>
>  Bran
>

Re: Posting Metrics to Ambari

Posted by Siddharth Wagle <sw...@hortonworks.com>.
Hi Bryan,


Few things you can do:


1. Turn on DEBUG mode by changing log4j.properties at, /etc/ambari-metrics-collector/conf/

This might reveal more info. I don't think we print every metric received to the log in 2.0 or 2.1; I recently added this to trunk as an option when TRACE is enabled.
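
(Concretely, that usually just means raising the root logger level in /etc/ambari-metrics-collector/conf/log4j.properties, roughly as below; the appender name is an assumption, so keep whatever appenders the shipped file already lists:)

  # appender name "file" is an assumption; keep the appenders already configured
  log4j.rootLogger=DEBUG,file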


2. Connect using Phoenix directly and you can do a SELECT query like this:

SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;


Instructions for connecting to Phoenix:

https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema


3. What API call are you making to get metrics?

E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>


-Sid


________________________________
From: Bryan Bende <bb...@gmail.com>
Sent: Friday, July 24, 2015 2:03 PM
To: user@ambari.apache.org
Subject: Posting Metrics to Ambari

I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification

I figured the easiest way to test it would be to get the latest HDP Sandbox... so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.

I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.

I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.

Any suggestions would be greatly appreciated.

Thanks,

Bryan